|
WEBVTT Kind: captions; Language: en-US |
|
|
|
NOTE |
|
Created on 2024-02-07T20:56:18.4162344Z by ClassTranscribe |
|
|
|
00:01:04.700 --> 00:01:05.450 |
|
All right. |
|
|
|
00:01:05.450 --> 00:01:06.810 |
|
Good morning, everybody. |
|
|
|
00:01:07.920 --> 00:01:08.950 |
|
Hope you're doing well. |
|
|
|
00:01:10.010 --> 00:01:10.940 |
|
So. |
|
|
|
00:01:11.010 --> 00:01:14.260 |
|
And so I'll jump into it. |
|
|
|
00:01:15.610 --> 00:01:16.000 |
|
All right. |
|
|
|
00:01:16.000 --> 00:01:18.966 |
|
So previously we learned about a lot of |
|
|
|
00:01:18.966 --> 00:01:21.260 |
|
different individual models, logistic |
|
|
|
00:01:21.260 --> 00:01:23.000 |
|
regression, KNN, and so on.
|
|
|
00:01:23.000 --> 00:01:25.140 |
|
We also learned about trees that are |
|
|
|
00:01:25.140 --> 00:01:27.683 |
|
able to learn features and split the |
|
|
|
00:01:27.683 --> 00:01:29.410 |
|
feature space into different chunks and |
|
|
|
00:01:29.410 --> 00:01:31.170 |
|
then make decisions in those different
|
|
|
00:01:31.170 --> 00:01:32.500 |
|
parts of the feature space. |
|
|
|
00:01:33.300 --> 00:01:34.980 |
|
And then in the last class we learned |
|
|
|
00:01:34.980 --> 00:01:37.884 |
|
about the bias variance tradeoff, that |
|
|
|
00:01:37.884 --> 00:01:41.310 |
|
you can have a very complex classifier |
|
|
|
00:01:41.310 --> 00:01:42.856 |
|
that requires a lot of data to learn |
|
|
|
00:01:42.856 --> 00:01:44.919 |
|
and that might have low bias that can |
|
|
|
00:01:44.920 --> 00:01:47.190 |
|
fit the training data really well, but |
|
|
|
00:01:47.190 --> 00:01:48.720 |
|
high variance that you might get |
|
|
|
00:01:48.720 --> 00:01:50.240 |
|
different classifiers with different |
|
|
|
00:01:50.240 --> 00:01:50.980 |
|
samples of data. |
|
|
|
00:01:51.730 --> 00:01:53.865 |
|
|
|
|
00:01:53.865 --> 00:01:56.170 |
|
Or you can have a high bias, low |
|
|
|
00:01:56.170 --> 00:01:59.429 |
|
variance classifier, a short tree, or a |
|
|
|
00:01:59.430 --> 00:02:01.390 |
|
linear model that might not be able to |
|
|
|
00:02:01.390 --> 00:02:03.430 |
|
fit the training data perfectly, but |
|
|
|
00:02:03.430 --> 00:02:05.648 |
|
will do similarly on the test data to
|
|
|
00:02:05.648 --> 00:02:06.380 |
|
the training data. |
|
|
|
00:02:07.250 --> 00:02:10.670 |
|
And then there's the escape from that.
|
|
|
00:02:10.670 --> 00:02:12.440 |
|
So usually you have this tradeoff where |
|
|
|
00:02:12.440 --> 00:02:14.020 |
|
you have to choose one or the other, |
|
|
|
00:02:14.020 --> 00:02:16.880 |
|
but ensembles are able to escape that |
|
|
|
00:02:16.880 --> 00:02:19.360 |
|
tradeoff by combining multiple |
|
|
|
00:02:19.360 --> 00:02:21.390 |
|
classifiers to either reduce the |
|
|
|
00:02:21.390 --> 00:02:24.070 |
|
variance of each or reduce the bias of |
|
|
|
00:02:24.070 --> 00:02:24.540 |
|
them. |
|
|
|
00:02:26.500 --> 00:02:30.250 |
|
So we also talked
|
|
|
00:02:30.250 --> 00:02:32.400 |
|
particularly about boosted
|
|
|
00:02:32.400 --> 00:02:34.390 |
|
trees and random forests, which are two |
|
|
|
00:02:34.390 --> 00:02:37.180 |
|
of the most powerful and widely used
|
|
|
00:02:37.180 --> 00:02:40.130 |
|
classifiers and regressors in machine
|
|
|
00:02:40.130 --> 00:02:40.870 |
|
learning. |
|
|
|
00:02:40.990 --> 00:02:43.740 |
|
The other is what we're starting to get |
|
|
|
00:02:43.740 --> 00:02:44.197 |
|
into. |
|
|
|
00:02:44.197 --> 00:02:46.800 |
|
We're starting to work our way towards |
|
|
|
00:02:46.800 --> 00:02:50.030 |
|
neural networks, which as you know is |
|
|
|
00:02:50.030 --> 00:02:52.300 |
|
the dominant approach right
|
|
|
00:02:52.300 --> 00:02:53.270 |
|
now in machine learning. |
|
|
|
00:02:54.260 --> 00:02:56.530 |
|
But before we get there, I want to |
|
|
|
00:02:56.530 --> 00:03:00.630 |
|
introduce one more individual model |
|
|
|
00:03:00.630 --> 00:03:02.630 |
|
which is the support vector machine. |
|
|
|
00:03:03.410 --> 00:03:05.255 |
|
Support vector machines, or SVMs.
|
|
|
00:03:05.255 --> 00:03:06.790 |
|
So usually you'll just see people call |
|
|
|
00:03:06.790 --> 00:03:08.660 |
|
it SVM without writing out the full |
|
|
|
00:03:08.660 --> 00:03:08.940 |
|
name. |
|
|
|
00:03:09.580 --> 00:03:11.652 |
|
They were developed in the 1990s by Vapnik
|
|
|
00:03:11.652 --> 00:03:15.170 |
|
and his colleagues AT&T Bell Labs and |
|
|
|
00:03:15.170 --> 00:03:16.870 |
|
it was based on statistical learning |
|
|
|
00:03:16.870 --> 00:03:18.573 |
|
theory; that learning theory
|
|
|
00:03:18.573 --> 00:03:21.050 |
|
was actually developed by Vapnik and |
|
|
|
00:03:21.050 --> 00:03:23.820 |
|
independently by others as early as the |
|
|
|
00:03:23.820 --> 00:03:25.000 |
|
40s or 50s. |
|
|
|
00:03:25.860 --> 00:03:28.020 |
|
But that led to the SVM algorithm in |
|
|
|
00:03:28.020 --> 00:03:28.730 |
|
the 90s. |
|
|
|
00:03:29.840 --> 00:03:32.780 |
|
And SVMs for a while were the most
|
|
|
00:03:32.780 --> 00:03:35.320 |
|
popular machine learning algorithm, |
|
|
|
00:03:35.320 --> 00:03:37.740 |
|
mainly because they have a really good |
|
|
|
00:03:37.740 --> 00:03:39.420 |
|
justification in terms of |
|
|
|
00:03:39.420 --> 00:03:42.500 |
|
generalization theory, and they
|
|
|
00:03:42.500 --> 00:03:44.820 |
|
can be optimized. |
|
|
|
00:03:45.420 --> 00:03:49.000 |
|
And so for a while, people felt like |
|
|
|
00:03:49.000 --> 00:03:51.100 |
|
ANNs were kind of a dead end.
|
|
|
00:03:51.900 --> 00:03:54.400 |
|
That is, artificial neural networks are a
|
|
|
00:03:54.400 --> 00:03:56.117 |
|
dead end because they're a black box. |
|
|
|
00:03:56.117 --> 00:03:57.216 |
|
They're hard to understand, they're |
|
|
|
00:03:57.216 --> 00:03:59.980 |
|
hard to optimize, and SVMs were able to
|
|
|
00:03:59.980 --> 00:04:02.780 |
|
get like similar performance, but are |
|
|
|
00:04:02.780 --> 00:04:03.780 |
|
much better understood. |
|
|
|
00:04:06.080 --> 00:04:08.110 |
|
So SVMs are kind of worth knowing in
|
|
|
00:04:08.110 --> 00:04:09.170 |
|
their own right.
|
|
|
00:04:09.170 --> 00:04:12.390 |
|
But actually the main reason that I
|
|
|
00:04:12.390 --> 00:04:15.440 |
|
decided to teach about SVMs is because
|
|
|
00:04:15.440 --> 00:04:17.320 |
|
there's a lot of other concepts |
|
|
|
00:04:17.320 --> 00:04:19.710 |
|
associated with SVMs that are widely
|
|
|
00:04:19.710 --> 00:04:21.718 |
|
applicable that are worth knowing. |
|
|
|
00:04:21.718 --> 00:04:24.245 |
|
So one is the generalization properties |
|
|
|
00:04:24.245 --> 00:04:26.370 |
|
that they try to, for example, achieve |
|
|
|
00:04:26.370 --> 00:04:27.240 |
|
a big margin. |
|
|
|
00:04:27.240 --> 00:04:28.670 |
|
I'll explain what that means.
|
|
|
00:04:29.460 --> 00:04:31.400 |
|
And they have a decision that relies on
|
|
|
00:04:31.400 --> 00:04:33.700 |
|
limited training data, which is called |
|
|
|
00:04:33.700 --> 00:04:35.250 |
|
structural risk minimization. |
|
|
|
00:04:36.110 --> 00:04:38.560 |
|
Another is you can incorporate the idea |
|
|
|
00:04:38.560 --> 00:04:40.800 |
|
of kernels, which is that you can |
|
|
|
00:04:40.800 --> 00:04:44.670 |
|
define how 2 examples are similar and |
|
|
|
00:04:44.670 --> 00:04:48.470 |
|
then use that as a basis of training a |
|
|
|
00:04:48.470 --> 00:04:48.930 |
|
model. |
|
|
|
00:04:49.660 --> 00:04:51.370 |
|
And related to that. |
|
|
|
00:04:52.120 --> 00:04:54.940 |
|
We can see how you can formulate the |
|
|
|
00:04:54.940 --> 00:04:56.600 |
|
same problem in different ways. |
|
|
|
00:04:56.600 --> 00:04:59.560 |
|
So for SVMs, you can formulate it in
|
|
|
00:04:59.560 --> 00:05:01.550 |
|
what's called the primal, which just |
|
|
|
00:05:01.550 --> 00:05:04.050 |
|
means that for a linear model you're |
|
|
|
00:05:04.050 --> 00:05:06.259 |
|
saying that the model is a weighted sum of all the
|
|
|
00:05:06.260 --> 00:05:07.030 |
|
features. |
|
|
|
00:05:07.030 --> 00:05:09.670 |
|
Or you can formulate it in the dual, |
|
|
|
00:05:09.670 --> 00:05:12.180 |
|
which is that you say that the weights |
|
|
|
00:05:12.180 --> 00:05:14.485 |
|
are actually a sum of all the training |
|
|
|
00:05:14.485 --> 00:05:16.220 |
|
examples, a weighted sum of the training
|
|
|
00:05:16.220 --> 00:05:16.509 |
|
examples. |
|
|
|
00:05:17.300 --> 00:05:18.340 |
|
And I think it's just kind of |
|
|
|
00:05:18.340 --> 00:05:19.230 |
|
interesting that
|
|
|
00:05:20.170 --> 00:05:21.910 |
|
you can show that for many linear
|
|
|
00:05:21.910 --> 00:05:24.010 |
|
models, we tend to think of them as |
|
|
|
00:05:24.010 --> 00:05:26.150 |
|
though the linear model
|
|
|
00:05:26.150 --> 00:05:27.780 |
|
corresponds to feature importance, and |
|
|
|
00:05:27.780 --> 00:05:28.940 |
|
you're learning a value for each |
|
|
|
00:05:28.940 --> 00:05:33.680 |
|
feature, which is true, but the optimal |
|
|
|
00:05:33.680 --> 00:05:36.570 |
|
linear model can often be expressed as |
|
|
|
00:05:36.570 --> 00:05:38.575 |
|
just a combination of the training |
|
|
|
00:05:38.575 --> 00:05:39.740 |
|
examples directly, a weighted
|
|
|
00:05:39.740 --> 00:05:41.250 |
|
combination of the training examples. |
|
|
|
00:05:41.870 --> 00:05:43.770 |
|
So it gives an interesting perspective |
|
|
|
00:05:43.770 --> 00:05:44.660 |
|
I think. |
|
|
|
00:05:44.660 --> 00:05:46.630 |
|
And then finally there's an |
|
|
|
00:05:46.630 --> 00:05:49.240 |
|
optimization method for SVMs that was
|
|
|
00:05:49.240 --> 00:05:52.430 |
|
proposed that is called the
|
|
|
00:05:52.430 --> 00:05:54.520 |
|
subgradient method.
|
|
|
00:05:55.260 --> 00:05:57.150 |
|
In particular, the general
|
|
|
00:05:57.150 --> 00:05:58.680 |
|
method is called stochastic gradient
|
|
|
00:05:58.680 --> 00:06:01.780 |
|
descent and this is how optimization is |
|
|
|
00:06:01.780 --> 00:06:03.380 |
|
done for neural networks. |
|
|
|
00:06:03.380 --> 00:06:05.450 |
|
So I wanted to introduce it in the case |
|
|
|
00:06:05.450 --> 00:06:08.310 |
|
of the SVMs, where it's a little bit
|
|
|
00:06:08.310 --> 00:06:10.530 |
|
simpler before I get into. |
|
|
|
00:06:11.480 --> 00:06:15.050 |
|
Perceptrons and MLPs, multilayer
|
|
|
00:06:15.050 --> 00:06:15.780 |
|
perceptrons. |
|
|
|
00:06:18.250 --> 00:06:21.980 |
|
So there are three parts of
|
|
|
00:06:21.980 --> 00:06:22.620 |
|
this lecture. |
|
|
|
00:06:22.620 --> 00:06:24.290 |
|
First, I'm going to talk about linear |
|
|
|
00:06:24.290 --> 00:06:24.850 |
|
SVMs.
|
|
|
00:06:25.560 --> 00:06:27.660 |
|
And then I'm going to talk about |
|
|
|
00:06:27.660 --> 00:06:29.900 |
|
kernels and nonlinear SVMs.
|
|
|
00:06:30.550 --> 00:06:33.000 |
|
And then finally, the SVM optimization.
|
|
|
00:06:33.000 --> 00:06:34.010 |
|
|
|
|
00:06:34.700 --> 00:06:36.560 |
|
I might not get to the third part |
|
|
|
00:06:36.560 --> 00:06:39.160 |
|
today, we'll see, but I don't want to |
|
|
|
00:06:39.160 --> 00:06:40.395 |
|
rush it too much. |
|
|
|
00:06:40.395 --> 00:06:43.090 |
|
But even if not, this leads naturally |
|
|
|
00:06:43.090 --> 00:06:45.040 |
|
into the next lecture, which would |
|
|
|
00:06:45.040 --> 00:06:47.220 |
|
basically be SGD on perceptrons.
|
|
|
00:06:51.360 --> 00:06:55.065 |
|
Alright, so SVMs pose a
|
|
|
00:06:55.065 --> 00:06:56.390 |
|
different answer to what's the best |
|
|
|
00:06:56.390 --> 00:06:57.710 |
|
linear classifier. |
|
|
|
00:06:57.710 --> 00:07:00.625 |
|
As we discussed previously, if you have |
|
|
|
00:07:00.625 --> 00:07:03.260 |
|
a set of linearly separable data, these
|
|
|
00:07:03.260 --> 00:07:05.540 |
|
red X's and green O's, then there's
|
|
|
00:07:05.540 --> 00:07:06.939 |
|
actually a bunch of different linear |
|
|
|
00:07:06.940 --> 00:07:09.610 |
|
models that could separate the X's from |
|
|
|
00:07:09.610 --> 00:07:10.170 |
|
the O's. |
|
|
|
00:07:11.540 --> 00:07:13.860 |
|
So logistic regression has one way of |
|
|
|
00:07:13.860 --> 00:07:16.240 |
|
choosing the best model, which is |
|
|
|
00:07:16.240 --> 00:07:18.020 |
|
you're maximizing the expected log |
|
|
|
00:07:18.020 --> 00:07:20.564 |
|
likelihood of the labels given the |
|
|
|
00:07:20.564 --> 00:07:20.934 |
|
data. |
|
|
|
00:07:20.934 --> 00:07:23.620 |
|
So given some boundary, it implies
|
|
|
00:07:23.620 --> 00:07:25.414 |
|
some probability for each of the data |
|
|
|
00:07:25.414 --> 00:07:25.612 |
|
points. |
|
|
|
00:07:25.612 --> 00:07:26.990 |
|
The data points that are really far |
|
|
|
00:07:26.990 --> 00:07:29.260 |
|
from the boundary have like a really |
|
|
|
00:07:29.260 --> 00:07:30.970 |
|
high confidence, and if that's correct, |
|
|
|
00:07:30.970 --> 00:07:32.962 |
|
it means they have a low loss, and |
|
|
|
00:07:32.962 --> 00:07:36.025 |
|
labels that are on the wrong side of |
|
|
|
00:07:36.025 --> 00:07:37.475 |
|
the boundary or close to the boundary |
|
|
|
00:07:37.475 --> 00:07:38.650 |
|
have a higher loss. |
|
|
|
00:07:39.880 --> 00:07:42.870 |
|
And so as a result of that objective, |
|
|
|
00:07:42.870 --> 00:07:45.580 |
|
the logistic regression depends on all |
|
|
|
00:07:45.580 --> 00:07:46.760 |
|
the training examples. |
|
|
|
00:07:46.760 --> 00:07:48.550 |
|
Even examples that are very confidently |
|
|
|
00:07:48.550 --> 00:07:51.270 |
|
correct will contribute a little bit to |
|
|
|
00:07:51.270 --> 00:07:53.470 |
|
the loss of the optimization. |
|
|
|
00:07:54.980 --> 00:07:57.210 |
|
On the other hand, SVM makes a very |
|
|
|
00:07:57.210 --> 00:07:59.010 |
|
different kind of decision. |
|
|
|
00:07:59.010 --> 00:08:02.455 |
|
So with SVM, the goal is to make all of the
|
|
|
00:08:02.455 --> 00:08:04.545 |
|
examples at least minimally confident. |
|
|
|
00:08:04.545 --> 00:08:06.800 |
|
So you want all the examples to be at |
|
|
|
00:08:06.800 --> 00:08:08.560 |
|
least some distance from the boundary. |
|
|
|
00:08:09.770 --> 00:08:11.430 |
|
And then the decision is based on a |
|
|
|
00:08:11.430 --> 00:08:14.040 |
|
minimum set of examples, so that even |
|
|
|
00:08:14.040 --> 00:08:15.875 |
|
if you were to remove a lot of the |
|
|
|
00:08:15.875 --> 00:08:17.243 |
|
examples, it won't actually change
|
|
|
00:08:17.243 --> 00:08:17.929 |
|
the decision. |
|
|
|
00:08:22.350 --> 00:08:24.840 |
|
So there's a little bit of
|
|
|
00:08:24.840 --> 00:08:26.980 |
|
terminology that comes with SVMs that's
|
|
|
00:08:26.980 --> 00:08:29.860 |
|
worth being careful about. |
|
|
|
00:08:30.600 --> 00:08:31.960 |
|
One is the margin. |
|
|
|
00:08:31.960 --> 00:08:34.680 |
|
So the margin is just the distance of
|
|
|
00:08:34.680 --> 00:08:36.950 |
|
an example from the boundary.
|
|
|
00:08:36.950 --> 00:08:39.530 |
|
So in this case this is an SVM fit to |
|
|
|
00:08:39.530 --> 00:08:43.030 |
|
these examples and this is like the |
|
|
|
00:08:43.030 --> 00:08:45.479 |
|
minimum margin of any of the examples. |
|
|
|
00:08:45.480 --> 00:08:47.488 |
|
But the margin is just the distance |
|
|
|
00:08:47.488 --> 00:08:49.330 |
|
from this boundary in the correct |
|
|
|
00:08:49.330 --> 00:08:49.732 |
|
direction. |
|
|
|
00:08:49.732 --> 00:08:53.180 |
|
So if an X were over here, it would
|
|
|
00:08:53.180 --> 00:08:55.985 |
|
have like a negative margin because it |
|
|
|
00:08:55.985 --> 00:08:57.629 |
|
would be on the wrong side of the |
|
|
|
00:08:57.629 --> 00:09:00.130 |
|
boundary and if X is really far in this |
|
|
|
00:09:00.130 --> 00:09:00.450 |
|
direction. |
|
|
|
00:09:00.510 --> 00:09:04.050 |
|
Then it has a high positive margin. |
|
|
|
00:09:04.900 --> 00:09:07.935 |
|
And the margin is normalized by the |
|
|
|
00:09:07.935 --> 00:09:09.380 |
|
weight length. |
|
|
|
00:09:09.380 --> 00:09:11.490 |
|
This is the L2 norm of the weight vector.
|
|
|
00:09:13.340 --> 00:09:17.530 |
|
Because if the data is
|
|
|
00:09:17.530 --> 00:09:20.140 |
|
linearly separable and you arbitrarily |
|
|
|
00:09:20.140 --> 00:09:21.940 |
|
increase W, say you
|
|
|
00:09:21.940 --> 00:09:26.440 |
|
multiply it by 1000, then the
|
|
|
00:09:26.440 --> 00:09:29.155 |
|
score of each data point will just |
|
|
|
00:09:29.155 --> 00:09:31.640 |
|
linearly increase with the length of W,
|
|
|
00:09:31.640 --> 00:09:33.329 |
|
so you need to normalize by the length of W.
|
|
|
00:09:34.280 --> 00:09:36.960 |
|
So mathematically, the margin is just this.
|
|
|
00:09:36.960 --> 00:09:40.170 |
|
This is the linear model W transpose X |
|
|
|
00:09:40.170 --> 00:09:42.920 |
|
the dot product of weights and X, plus some bias term
|
|
|
00:09:42.920 --> 00:09:43.150 |
|
B. |
|
|
|
00:09:44.460 --> 00:09:47.820 |
|
I just want to note that the bias term
|
|
|
00:09:47.820 --> 00:09:50.131 |
|
in this context is not the same as |
|
|
|
00:09:50.131 --> 00:09:51.756 |
|
classifier bias.
|
|
|
00:09:51.756 --> 00:09:54.440 |
|
Classifier bias means that you can't |
|
|
|
00:09:54.440 --> 00:09:57.110 |
|
fit like some kinds of decision |
|
|
|
00:09:57.110 --> 00:10:00.000 |
|
boundaries, but the bias term is just |
|
|
|
00:10:00.000 --> 00:10:02.260 |
|
adding a constant to your prediction. |
|
|
|
00:10:04.440 --> 00:10:06.470 |
|
So we have a linear model here. |
|
|
|
00:10:06.470 --> 00:10:07.605 |
|
It gets multiplied by Y. |
|
|
|
00:10:07.605 --> 00:10:09.420 |
|
So in other words, if this is positive |
|
|
|
00:10:09.420 --> 00:10:10.930 |
|
then I made a correct decision. |
|
|
|
00:10:11.510 --> 00:10:13.660 |
|
And if this is negative, then I made an |
|
|
|
00:10:13.660 --> 00:10:14.660 |
|
incorrect decision. |
|
|
|
00:10:14.660 --> 00:10:17.280 |
|
If Y is -1, for example, but the model
|
|
|
00:10:17.280 --> 00:10:20.840 |
|
predicts a 2, then this will be -2, and
|
|
|
00:10:20.840 --> 00:10:22.710 |
|
that means that I'm kind of
|
|
|
00:10:22.710 --> 00:10:24.010 |
|
confidently incorrect. |
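
NOTE
[Editor's sketch, not from the lecture] A minimal NumPy illustration of the margin as just described: the label times the linear model output, normalized by the length of W. All values here are made up.

import numpy as np

w = np.array([2.0, -1.0])   # hypothetical weight vector
b = 0.5                     # hypothetical bias term
x = np.array([1.0, 3.0])    # one example's features
y = -1                      # its label, in {-1, +1}

score = w @ x + b                       # raw linear model output
margin = y * score / np.linalg.norm(w)  # positive if correct, negative if not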
|
|
|
00:10:26.690 --> 00:10:27.530 |
|
OK. |
|
|
|
00:10:27.530 --> 00:10:30.575 |
|
And then the second term is a support |
|
|
|
00:10:30.575 --> 00:10:32.490 |
|
vector, so support vector machines
|
|
|
00:10:32.490 --> 00:10:33.740 |
|
have it in the title.
|
|
|
00:10:33.740 --> 00:10:36.370 |
|
A support vector is an example that |
|
|
|
00:10:36.370 --> 00:10:41.290 |
|
lies on the margin of 1, so on that |
|
|
|
00:10:41.290 --> 00:10:42.250 |
|
minimum margin. |
|
|
|
00:10:43.100 --> 00:10:45.480 |
|
So the points that lie within a margin |
|
|
|
00:10:45.480 --> 00:10:47.200 |
|
of one are the support vectors, and |
|
|
|
00:10:47.200 --> 00:10:48.800 |
|
actually the decision only depends on |
|
|
|
00:10:48.800 --> 00:10:50.310 |
|
those support vectors at the end. |
|
|
|
00:10:53.170 --> 00:10:56.140 |
|
So the objective of the SVM is to try |
|
|
|
00:10:56.140 --> 00:10:59.080 |
|
to minimize the sum of squared weights |
|
|
|
00:10:59.080 --> 00:11:01.970 |
|
while preserving a margin of 1. So you
|
|
|
00:11:01.970 --> 00:11:05.340 |
|
could also cast it as your weight
|
|
|
00:11:05.340 --> 00:11:06.930 |
|
vector is constrained to be unit |
|
|
|
00:11:06.930 --> 00:11:08.515 |
|
length, but you want to maximize the |
|
|
|
00:11:08.515 --> 00:11:08.770 |
|
margin. |
|
|
|
00:11:08.770 --> 00:11:11.590 |
|
Those are just equivalent formulations. |
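
NOTE
[Editor's note] In symbols, the two equivalent formulations just mentioned, for training examples (x_i, y_i) with y_i in {-1, +1}:

\min_{w,b} \|w\|^2 \quad \text{subject to} \quad y_i(w^\top x_i + b) \ge 1 \ \text{for all } i

or, equivalently,

\max_{w,b,\gamma} \gamma \quad \text{subject to} \quad \|w\| = 1, \quad y_i(w^\top x_i + b) \ge \gamma \ \text{for all } i.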
|
|
|
00:11:13.240 --> 00:11:15.740 |
|
So here's an example of an
|
|
|
00:11:15.740 --> 00:11:16.640 |
|
optimized model. |
|
|
|
00:11:16.640 --> 00:11:18.560 |
|
Now here I added like a big probability |
|
|
|
00:11:18.560 --> 00:11:21.470 |
|
mass of X's over here, and note that |
|
|
|
00:11:21.470 --> 00:11:23.450 |
|
the SVM doesn't care about them at all. |
|
|
|
00:11:23.450 --> 00:11:25.680 |
|
It only cares about these examples that |
|
|
|
00:11:25.680 --> 00:11:27.720 |
|
are really close to this decision |
|
|
|
00:11:27.720 --> 00:11:29.769 |
|
boundary between the O's and the X's.
|
|
|
00:11:30.420 --> 00:11:35.060 |
|
So these three examples that are
|
|
|
00:11:35.060 --> 00:11:37.760 |
|
equidistant from the decision boundary |
|
|
|
00:11:37.760 --> 00:11:39.717 |
|
have determined the
|
|
|
00:11:39.717 --> 00:11:40.193 |
|
decision boundary. |
|
|
|
00:11:40.193 --> 00:11:42.094 |
|
These are the X's that are closest to |
|
|
|
00:11:42.094 --> 00:11:44.320 |
|
the O's and the O that's closest to the |
|
|
|
00:11:44.320 --> 00:11:46.260 |
|
X's, while the ones that have a
|
|
|
00:11:46.260 --> 00:11:48.280 |
|
higher margin have not influenced the |
|
|
|
00:11:48.280 --> 00:11:48.960 |
|
decision boundary. |
|
|
|
00:11:51.590 --> 00:11:55.200 |
|
In fact, if the data
|
|
|
00:11:55.200 --> 00:11:58.532 |
|
is linearly separable and you have
|
|
|
00:11:58.532 --> 00:12:00.140 |
|
two-dimensional features like I have |
|
|
|
00:12:00.140 --> 00:12:02.690 |
|
here, these are the features X1 and X2, |
|
|
|
00:12:02.690 --> 00:12:04.580 |
|
then there will always be 3 support |
|
|
|
00:12:04.580 --> 00:12:05.890 |
|
vectors. |
|
|
|
00:12:05.890 --> 00:12:06.340 |
|
Question. |
|
|
|
00:12:08.680 --> 00:12:10.170 |
|
So yeah, good question. |
|
|
|
00:12:10.170 --> 00:12:12.900 |
|
So the decision boundary is if the |
|
|
|
00:12:12.900 --> 00:12:15.207 |
|
features are on one side of the |
|
|
|
00:12:15.207 --> 00:12:16.656 |
|
boundary, then it's going to be one |
|
|
|
00:12:16.656 --> 00:12:18.155 |
|
class, and if they're on the other side |
|
|
|
00:12:18.155 --> 00:12:19.718 |
|
of the boundary then it will be the |
|
|
|
00:12:19.718 --> 00:12:20.210 |
|
other class. |
|
|
|
00:12:21.130 --> 00:12:23.380 |
|
And in terms of the linear model, if |
|
|
|
00:12:23.380 --> 00:12:26.850 |
|
your model is W transpose X +
|
|
|
00:12:26.850 --> 00:12:29.435 |
|
B, so it's like a weighted sum of the features plus
|
|
|
00:12:29.435 --> 00:12:30.260 |
|
the bias term. |
|
|
|
00:12:31.120 --> 00:12:33.030 |
|
The decision boundary is where that |
|
|
|
00:12:33.030 --> 00:12:34.440 |
|
value is 0. |
|
|
|
00:12:34.440 --> 00:12:37.610 |
|
So if this value, W transpose X + B,
|
|
|
00:12:38.300 --> 00:12:40.460 |
|
is greater than zero, then you're
|
|
|
00:12:40.460 --> 00:12:43.060 |
|
predicting that the label is 1, and if |
|
|
|
00:12:43.060 --> 00:12:45.225 |
|
this is less than zero, then you're |
|
|
|
00:12:45.225 --> 00:12:48.136 |
|
predicting that the label is -1, and
|
|
|
00:12:48.136 --> 00:12:49.990 |
|
if it's equal to 0, then you're right |
|
|
|
00:12:49.990 --> 00:12:51.750 |
|
on the boundary of that decision. |
|
|
|
00:12:52.590 --> 00:12:53.940 |
|
Does that help?
|
|
|
00:12:55.640 --> 00:12:56.310 |
|
Yeah. |
|
|
|
00:13:03.470 --> 00:13:04.010 |
|
If. |
|
|
|
00:13:04.090 --> 00:13:04.740 |
|
And. |
|
|
|
00:13:05.910 --> 00:13:08.550 |
|
So the decision boundary,
|
|
|
00:13:08.550 --> 00:13:10.320 |
|
actually it's not shown
|
|
|
00:13:10.320 --> 00:13:11.390 |
|
here, but it also kind of has a
|
|
|
00:13:11.390 --> 00:13:11.910 |
|
direction. |
|
|
|
00:13:12.590 --> 00:13:14.958 |
|
So if things are on one side of the |
|
|
|
00:13:14.958 --> 00:13:16.490 |
|
boundary then they would be X's, and if |
|
|
|
00:13:16.490 --> 00:13:17.600 |
|
they're on the other side of the |
|
|
|
00:13:17.600 --> 00:13:18.740 |
|
boundary then they'd be O's.
|
|
|
00:13:20.890 --> 00:13:22.920 |
|
And the boundary is fit to this data, |
|
|
|
00:13:22.920 --> 00:13:25.775 |
|
so it's solved for in a way that this |
|
|
|
00:13:25.775 --> 00:13:26.620 |
|
is true. |
|
|
|
00:13:29.100 --> 00:13:30.080 |
|
Question. |
|
|
|
00:13:30.080 --> 00:13:30.830 |
|
So how? |
|
|
|
00:13:31.830 --> 00:13:35.420 |
|
will it perform when two data sets are
|
|
|
00:13:35.420 --> 00:13:37.020 |
|
merged with each other, like when |
|
|
|
00:13:37.020 --> 00:13:40.300 |
|
they're not separable, only
|
|
|
00:13:40.300 --> 00:13:41.310 |
|
mostly separable. |
|
|
|
00:13:41.560 --> 00:13:43.020 |
|
They have a lot of overlap.
|
|
|
00:13:43.020 --> 00:13:44.902 |
|
Yeah, I'll get to that. |
|
|
|
00:13:44.902 --> 00:13:45.220 |
|
Yeah. |
|
|
|
00:13:45.220 --> 00:13:46.500 |
|
For now, I'm just dealing with this |
|
|
|
00:13:46.500 --> 00:13:48.410 |
|
separable case where they can be |
|
|
|
00:13:48.410 --> 00:13:49.906 |
|
perfectly classified. |
|
|
|
00:13:49.906 --> 00:13:53.510 |
|
So the linear logistic regression |
|
|
|
00:13:53.510 --> 00:13:55.379 |
|
behaves differently because it wants, |
|
|
|
00:13:55.380 --> 00:13:57.240 |
|
there are a lot of data points and they
|
|
|
00:13:57.240 --> 00:14:00.760 |
|
will all have some loss even if they're |
|
|
|
00:14:00.760 --> 00:14:02.314 |
|
like further away than other data |
|
|
|
00:14:02.314 --> 00:14:03.372 |
|
points from the boundary. |
|
|
|
00:14:03.372 --> 00:14:05.230 |
|
And so it wants them all to be really |
|
|
|
00:14:05.230 --> 00:14:06.770 |
|
far from the boundary so that they're |
|
|
|
00:14:06.770 --> 00:14:08.390 |
|
not incurring a lot of loss in total. |
|
|
|
00:14:09.260 --> 00:14:11.633 |
|
So the linear logistic regression will |
|
|
|
00:14:11.633 --> 00:14:13.880 |
|
push the decision
|
|
|
00:14:13.880 --> 00:14:16.139 |
|
boundary away from this cluster of X's, |
|
|
|
00:14:16.140 --> 00:14:17.970 |
|
even if it means that it has to be |
|
|
|
00:14:17.970 --> 00:14:19.810 |
|
closer to one of the other X's.
|
|
|
00:14:20.810 --> 00:14:22.650 |
|
And in some sense, this is a reasonable |
|
|
|
00:14:22.650 --> 00:14:25.260 |
|
thing to do, because it
|
|
|
00:14:25.260 --> 00:14:27.210 |
|
improves your overall average |
|
|
|
00:14:27.210 --> 00:14:29.010 |
|
confidence in the correct label. |
|
|
|
00:14:29.740 --> 00:14:32.143 |
|
Your average correct log confidence to |
|
|
|
00:14:32.143 --> 00:14:33.310 |
|
be precise. |
|
|
|
00:14:33.310 --> 00:14:37.100 |
|
But in another sense it's not so good |
|
|
|
00:14:37.100 --> 00:14:38.640 |
|
because if at the end of the
|
|
|
00:14:38.640 --> 00:14:39.930 |
|
day you're trying to minimize your |
|
|
|
00:14:39.930 --> 00:14:42.230 |
|
classification error, they're very well |
|
|
|
00:14:42.230 --> 00:14:44.380 |
|
could be other X's that are in the
|
|
|
00:14:44.380 --> 00:14:46.570 |
|
test data that are around this point, |
|
|
|
00:14:46.570 --> 00:14:47.940 |
|
and some of them might end up on the |
|
|
|
00:14:47.940 --> 00:14:49.040 |
|
wrong side of the boundary. |
|
|
|
00:14:56.200 --> 00:14:59.150 |
|
So this is the basic idea of the SVM, |
|
|
|
00:14:59.150 --> 00:15:01.870 |
|
and the reason that SVMs are so popular
|
|
|
00:15:01.870 --> 00:15:04.380 |
|
is because they have really good |
|
|
|
00:15:04.380 --> 00:15:05.590 |
|
marginalization. |
|
|
|
00:15:05.590 --> 00:15:07.550 |
|
I mean really good generalization |
|
|
|
00:15:07.550 --> 00:15:09.130 |
|
guarantees. |
|
|
|
00:15:10.130 --> 00:15:14.360 |
|
So there are like 2 main principles, 2
|
|
|
00:15:14.360 --> 00:15:16.720 |
|
main reasons that they generalize, and |
|
|
|
00:15:16.720 --> 00:15:18.380 |
|
again generalize means that they will |
|
|
|
00:15:18.380 --> 00:15:20.470 |
|
perform similarly on the test data
|
|
|
00:15:20.470 --> 00:15:21.700 |
|
compared to the training data. |
|
|
|
00:15:24.090 --> 00:15:26.030 |
|
One is maximizing the margin.
|
|
|
00:15:26.030 --> 00:15:28.320 |
|
So if all the examples are far from the |
|
|
|
00:15:28.320 --> 00:15:30.570 |
|
margin, then you can be confident that |
|
|
|
00:15:30.570 --> 00:15:31.770 |
|
other samples from the same |
|
|
|
00:15:31.770 --> 00:15:33.896 |
|
distribution are probably also going to |
|
|
|
00:15:33.896 --> 00:15:35.600 |
|
be on the correct side of the
|
|
|
00:15:35.600 --> 00:15:36.010 |
|
boundary. |
|
|
|
00:15:38.430 --> 00:15:41.410 |
|
The second thing is that it doesn't |
|
|
|
00:15:41.410 --> 00:15:43.380 |
|
depend on a lot of training samples. |
|
|
|
00:15:44.630 --> 00:15:48.810 |
|
So even if most of these X's and O's |
|
|
|
00:15:48.810 --> 00:15:50.640 |
|
disappeared, as long as these three |
|
|
|
00:15:50.640 --> 00:15:52.150 |
|
examples were here, you would end up |
|
|
|
00:15:52.150 --> 00:15:53.270 |
|
fitting the same boundary. |
|
|
|
00:15:54.170 --> 00:15:56.630 |
|
And so for example one way that you can |
|
|
|
00:15:56.630 --> 00:16:00.110 |
|
get an
|
|
|
00:16:00.110 --> 00:16:02.830 |
|
estimate of your test error is to do |
|
|
|
00:16:02.830 --> 00:16:04.120 |
|
leave one out cross validation. |
|
|
|
00:16:04.120 --> 00:16:06.310 |
|
Which is when you remove one data point |
|
|
|
00:16:06.310 --> 00:16:08.570 |
|
from the training set and then train a |
|
|
|
00:16:08.570 --> 00:16:10.715 |
|
model and then test it on that left out |
|
|
|
00:16:10.715 --> 00:16:12.370 |
|
point and then you keep on changing |
|
|
|
00:16:12.370 --> 00:16:13.350 |
|
which point is left out. |
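
NOTE
[Editor's sketch, not from the lecture] Leave-one-out cross-validation as just described, using scikit-learn; the toy data and the linear SVM here are stand-ins.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))                 # toy features
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)   # toy labels in {-1, +1}

# Train on all-but-one point, test on the held-out point, rotate, average.
acc = cross_val_score(SVC(kernel="linear"), X, y, cv=LeaveOneOut()).mean()
print(acc)   # an estimate of test accuracy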
|
|
|
00:16:13.960 --> 00:16:15.290 |
|
If you do leave one out cross |
|
|
|
00:16:15.290 --> 00:16:17.370 |
|
validation on this, then if you leave |
|
|
|
00:16:17.370 --> 00:16:19.286 |
|
out any of these points that are not on |
|
|
|
00:16:19.286 --> 00:16:20.690 |
|
the margin, that you're going to get |
|
|
|
00:16:20.690 --> 00:16:23.710 |
|
them correct, because the boundary will |
|
|
|
00:16:23.710 --> 00:16:25.440 |
|
be defined by only these three points |
|
|
|
00:16:25.440 --> 00:16:26.050 |
|
anyway. |
|
|
|
00:16:26.050 --> 00:16:27.779 |
|
In other words, leaving out any of |
|
|
|
00:16:27.779 --> 00:16:29.130 |
|
these points not on the margin won't |
|
|
|
00:16:29.130 --> 00:16:31.840 |
|
change the boundary, and so if they're |
|
|
|
00:16:31.840 --> 00:16:33.140 |
|
correct in training, they'll also be |
|
|
|
00:16:33.140 --> 00:16:34.040 |
|
correct in testing.
|
|
|
00:16:35.290 --> 00:16:36.780 |
|
So that leads to this. |
|
|
|
00:16:36.850 --> 00:16:38.905 |
|
|
|
|
00:16:38.905 --> 00:16:42.170 |
|
There's a proof here of the expected |
|
|
|
00:16:42.170 --> 00:16:43.000 |
|
test error. |
|
|
|
00:16:43.690 --> 00:16:45.425 |
|
A bound on the expected test error. |
|
|
|
00:16:45.425 --> 00:16:47.120 |
|
So the expected test error will be no |
|
|
|
00:16:47.120 --> 00:16:49.430 |
|
more than the percent of training |
|
|
|
00:16:49.430 --> 00:16:51.360 |
|
samples that are support vectors. |
|
|
|
00:16:51.360 --> 00:16:53.440 |
|
So in this case it would be 3 divided |
|
|
|
00:16:53.440 --> 00:16:55.110 |
|
by the total number of training points. |
|
|
|
00:16:56.250 --> 00:17:00.253 |
|
Or it could also be smaller
|
|
|
00:17:00.253 --> 00:17:00.486 |
|
than. |
|
|
|
00:17:00.486 --> 00:17:02.140 |
|
It will be smaller than the smallest of |
|
|
|
00:17:02.140 --> 00:17:02.760 |
|
these. |
|
|
|
00:17:03.910 --> 00:17:08.460 |
|
The D squared is like the
|
|
|
00:17:08.460 --> 00:17:11.040 |
|
diameter of the smallest ball that |
|
|
|
00:17:11.040 --> 00:17:11.470 |
|
contains. |
|
|
|
00:17:11.470 --> 00:17:13.161 |
|
It's the square of the diameter of the
|
|
|
00:17:13.161 --> 00:17:14.370 |
|
smallest ball that contains all these |
|
|
|
00:17:14.370 --> 00:17:14.730 |
|
points. |
|
|
|
00:17:15.420 --> 00:17:16.620 |
|
Compared to the margin. |
|
|
|
00:17:16.620 --> 00:17:18.853 |
|
So in other words, if the
|
|
|
00:17:18.853 --> 00:17:20.695 |
|
margin is like pretty big compared to |
|
|
|
00:17:20.695 --> 00:17:22.340 |
|
the general variance of the data |
|
|
|
00:17:22.340 --> 00:17:24.580 |
|
points, then you're going to have a |
|
|
|
00:17:24.580 --> 00:17:27.950 |
|
small test error. The proof is a lot
|
|
|
00:17:27.950 --> 00:17:28.950 |
|
more complicated. |
|
|
|
00:17:28.950 --> 00:17:30.930 |
|
It's at the link, though. Yeah?
|
|
|
00:17:33.500 --> 00:17:36.120 |
|
We find the support vectors through
|
|
|
00:17:36.120 --> 00:17:38.430 |
|
optimization? So, I will get to the
|
|
|
00:17:38.430 --> 00:17:40.280 |
|
optimization too, yeah. |
|
|
|
00:17:41.500 --> 00:17:42.160 |
|
So.
|
|
|
00:17:42.160 --> 00:17:44.290 |
|
There's actually many ways to solve it, |
|
|
|
00:17:44.290 --> 00:17:46.960 |
|
and in the third part I'll talk about
|
|
|
00:17:47.960 --> 00:17:51.920 |
|
what is called stochastic gradient descent,
|
|
|
00:17:51.920 --> 00:17:56.200 |
|
which is the fastest and
|
|
|
00:17:56.200 --> 00:17:58.080 |
|
probably the preferred way right now, |
|
|
|
00:17:58.080 --> 00:17:58.380 |
|
yeah? |
|
|
|
00:18:14.040 --> 00:18:17.385 |
|
So I think you
|
|
|
00:18:17.385 --> 00:18:18.880 |
|
could
|
|
|
00:18:18.880 --> 00:18:19.810 |
|
equivalently
|
|
|
00:18:20.470 --> 00:18:23.580 |
|
pose the problem as: you want to
|
|
|
00:18:23.790 --> 00:18:24.410 |
|
|
|
|
|
00:18:25.410 --> 00:18:26.182 |
|
maximize the margin.
|
|
|
00:18:26.182 --> 00:18:31.216 |
|
So this distance here is like
|
|
|
00:18:31.216 --> 00:18:35.140 |
|
(W transpose X + b) * Y.
|
|
|
00:18:35.140 --> 00:18:38.950 |
|
So in other words, if WTX is very far |
|
|
|
00:18:38.950 --> 00:18:40.550 |
|
from the boundary then you have a high |
|
|
|
00:18:40.550 --> 00:18:43.414 |
|
margin that's like the distance in this |
|
|
|
00:18:43.414 --> 00:18:44.560 |
|
like plotted space. |
|
|
|
00:18:45.920 --> 00:18:48.440 |
|
And if you just like arbitrarily |
|
|
|
00:18:48.440 --> 00:18:50.380 |
|
increase W, then that distance is going |
|
|
|
00:18:50.380 --> 00:18:52.030 |
|
to increase because you're multiplying |
|
|
|
00:18:52.030 --> 00:18:54.050 |
|
X by a larger number for each of your |
|
|
|
00:18:54.050 --> 00:18:54.460 |
|
weights. |
|
|
|
00:18:55.160 --> 00:18:57.523 |
|
And so you
|
|
|
00:18:57.523 --> 00:19:00.000 |
|
need to in some way normalize for the |
|
|
|
00:19:00.000 --> 00:19:00.705 |
|
weight length. |
|
|
|
00:19:00.705 --> 00:19:03.159 |
|
And one way to do that is to say
|
|
|
00:19:03.160 --> 00:19:05.293 |
|
that I'm going to fix my
|
|
|
00:19:05.293 --> 00:19:07.740 |
|
weights to be unit length, so they
|
|
|
00:19:07.740 --> 00:19:09.730 |
|
can't just get
|
|
|
00:19:09.730 --> 00:19:10.824 |
|
like arbitrarily bigger. |
|
|
|
00:19:10.824 --> 00:19:13.390 |
|
And I'm going to try to make the margin |
|
|
|
00:19:13.390 --> 00:19:14.820 |
|
as big as possible given that. |
|
|
|
00:19:15.790 --> 00:19:18.812 |
|
|
|
|
00:19:18.812 --> 00:19:20.250 |
|
It's probably just an easier |
|
|
|
00:19:20.250 --> 00:19:21.250 |
|
optimization problem. |
|
|
|
00:19:21.250 --> 00:19:23.100 |
|
I'm not sure exactly why, but it's |
|
|
|
00:19:23.100 --> 00:19:25.380 |
|
usually posed as you want to minimize |
|
|
|
00:19:25.380 --> 00:19:26.550 |
|
the length of the weights. |
|
|
|
00:19:27.250 --> 00:19:29.180 |
|
While maintaining that the margin is 1. |
|
|
|
00:19:29.910 --> 00:19:32.079 |
|
And I think that it may be that this |
|
|
|
00:19:32.080 --> 00:19:34.690 |
|
lends itself better to,
|
|
|
00:19:34.690 --> 00:19:36.260 |
|
I haven't talked about it yet, but to |
|
|
|
00:19:36.260 --> 00:19:38.120 |
|
the case when the data is not
|
|
|
00:19:38.120 --> 00:19:40.295 |
|
linearly separable, then it's very easy |
|
|
|
00:19:40.295 --> 00:19:42.677 |
|
to modify this objective to account for |
|
|
|
00:19:42.677 --> 00:19:44.410 |
|
the data that can't be correctly |
|
|
|
00:19:44.410 --> 00:19:44.980 |
|
classified. |
|
|
|
00:19:47.520 --> 00:19:50.140 |
|
Did you follow that at all?
|
|
|
00:19:52.740 --> 00:19:53.160 |
|
OK. |
|
|
|
00:19:56.510 --> 00:19:58.540 |
|
So. |
|
|
|
00:20:00.950 --> 00:20:02.700 |
|
Alright, so in the separable case, |
|
|
|
00:20:02.700 --> 00:20:04.360 |
|
meaning that you can perfectly classify |
|
|
|
00:20:04.360 --> 00:20:05.700 |
|
your data with a linear model. |
|
|
|
00:20:06.580 --> 00:20:08.630 |
|
The prediction is simply the sign of |
|
|
|
00:20:08.630 --> 00:20:12.077 |
|
your linear model W transpose X + B so |
|
|
|
00:20:12.077 --> 00:20:15.780 |
|
and the labels here are 1 and -1.
|
|
|
00:20:15.780 --> 00:20:17.295 |
|
You can see in like different cases, |
|
|
|
00:20:17.295 --> 00:20:19.054 |
|
sometimes people say, for a binary problem,
|
|
|
00:20:19.054 --> 00:20:21.045 |
|
the labels are zero or one and |
|
|
|
00:20:21.045 --> 00:20:23.022 |
|
sometimes they'll say it's -1 or 1.
|
|
|
00:20:23.022 --> 00:20:25.460 |
|
And it's mainly just chosen for the |
|
|
|
00:20:25.460 --> 00:20:26.712 |
|
simplicity of the math. |
|
|
|
00:20:26.712 --> 00:20:29.040 |
|
In this case,
|
|
|
00:20:29.040 --> 00:20:29.350 |
|
|
|
|
00:20:29.350 --> 00:20:31.080 |
|
it makes the math a lot simpler, so I
|
|
|
00:20:31.080 --> 00:20:33.794 |
|
don't have to say like if y = 0 then
|
|
|
00:20:33.794 --> 00:20:36.030 |
|
this, if y = 1 then this other thing I |
|
|
|
00:20:36.030 --> 00:20:36.410 |
|
can just plug
|
|
|
00:20:36.490 --> 00:20:37.630 |
|
Y into the equation. |
|
|
|
00:20:39.400 --> 00:20:42.540 |
|
The optimization is: I'm going to solve
|
|
|
00:20:42.540 --> 00:20:45.960 |
|
for the weights,
|
|
|
00:20:45.960 --> 00:20:48.930 |
|
the smallest weights that satisfy
|
|
|
00:20:48.930 --> 00:20:49.850 |
|
this constraint. |
|
|
|
00:20:50.680 --> 00:20:53.650 |
|
That the margin is one for all |
|
|
|
00:20:53.650 --> 00:20:56.840 |
|
examples, so the model
|
|
|
00:20:56.840 --> 00:20:59.826 |
|
prediction times the label is at least |
|
|
|
00:20:59.826 --> 00:21:01.600 |
|
one for every training sample. |
|
|
|
00:21:06.580 --> 00:21:09.440 |
|
If the data is not linearly separable, |
|
|
|
00:21:09.440 --> 00:21:12.490 |
|
then I can just extend this a little bit.
|
|
|
00:21:13.190 --> 00:21:14.520 |
|
And I can say. |
|
|
|
00:21:15.780 --> 00:21:17.000 |
|
I don't know what that sound is. |
|
|
|
00:21:17.000 --> 00:21:19.350 |
|
It's really weird, OK? |
|
|
|
00:21:20.690 --> 00:21:23.210 |
|
And if the data is not linearly |
|
|
|
00:21:23.210 --> 00:21:24.230 |
|
separable. |
|
|
|
00:21:25.130 --> 00:21:26.900 |
|
Then I can say that I'm going to just |
|
|
|
00:21:26.900 --> 00:21:30.240 |
|
pay a penalty of C times, like how much |
|
|
|
00:21:30.240 --> 00:21:32.467 |
|
that data violates my margin. |
|
|
|
00:21:32.467 --> 00:21:35.405 |
|
So if it has a margin of less than
|
|
|
00:21:35.405 --> 00:21:39.533 |
|
one, then I pay C * 1 minus its margin. |
|
|
|
00:21:39.533 --> 00:21:42.222 |
|
So for example if it's right on the |
|
|
|
00:21:42.222 --> 00:21:44.280 |
|
boundary, then W transpose X + b is |
|
|
|
00:21:44.280 --> 00:21:47.665 |
|
equal to 0 and so I pay a penalty of C |
|
|
|
00:21:47.665 --> 00:21:49.380 |
|
* 1. If it's negative,
|
|
|
00:21:49.380 --> 00:21:50.638 |
|
if it's on the wrong side of the
|
|
|
00:21:50.638 --> 00:21:51.820 |
|
boundary, then I'd pay an even higher |
|
|
|
00:21:51.820 --> 00:21:53.456 |
|
penalty, and if it's on the right side |
|
|
|
00:21:53.456 --> 00:21:54.500 |
|
of the boundary but
|
|
|
00:21:54.560 --> 00:21:56.800 |
|
the margin is less than one, then I
|
|
|
00:21:56.800 --> 00:21:57.810 |
|
pay a smaller penalty. |
|
|
|
00:22:00.520 --> 00:22:03.040 |
|
This is called the hinge loss, and I'll |
|
|
|
00:22:03.040 --> 00:22:04.050 |
|
show it here. |
|
|
|
00:22:04.050 --> 00:22:06.210 |
|
So in the hinge loss, if you're |
|
|
|
00:22:06.210 --> 00:22:08.130 |
|
confidently correct, there's zero |
|
|
|
00:22:08.130 --> 00:22:10.110 |
|
penalty if you have a margin of greater |
|
|
|
00:22:10.110 --> 00:22:12.080 |
|
than one in the case of an SVM. |
|
|
|
00:22:12.750 --> 00:22:14.820 |
|
But if you're not confidently correct |
|
|
|
00:22:14.820 --> 00:22:17.085 |
|
if you're unconfident or incorrect,
|
|
|
00:22:17.085 --> 00:22:18.980 |
|
which is when you're on
|
|
|
00:22:18.980 --> 00:22:20.640 |
|
this side of the decision boundary. |
|
|
|
00:22:21.300 --> 00:22:24.460 |
|
Then you pay a penalty and the penalty |
|
|
|
00:22:24.460 --> 00:22:26.070 |
|
just increases. |
|
|
|
00:22:27.410 --> 00:22:30.220 |
|
Proportionally to how far you are from |
|
|
|
00:22:30.220 --> 00:22:31.600 |
|
the margin of 1. |
|
|
|
00:22:33.010 --> 00:22:35.640 |
|
And say if you're just
|
|
|
00:22:35.640 --> 00:22:37.350 |
|
unconfidently correct, you pay a
|
|
|
00:22:37.350 --> 00:22:38.803 |
|
little penalty, if you're incorrect, |
|
|
|
00:22:38.803 --> 00:22:41.209 |
|
you pay a bigger penalty, and if you're |
|
|
|
00:22:41.210 --> 00:22:42.760 |
|
confidently incorrect, then you pay an |
|
|
|
00:22:42.760 --> 00:22:43.720 |
|
even bigger penalty. |
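
NOTE
[Editor's sketch, not from the lecture] The hinge loss just described, written out in NumPy; w, b, X, y are assumed inputs with labels in {-1, +1}.

import numpy as np

def hinge_loss(w, b, X, y, C=1.0):
    margins = y * (X @ w + b)   # margin of each example; >= 1 is confidently correct
    # Zero penalty past a margin of 1; otherwise C times (1 - margin).
    return C * np.maximum(0.0, 1.0 - margins).sum()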
|
|
|
00:22:45.420 --> 00:22:48.050 |
|
And this is important because,
|
|
|
00:22:48.780 --> 00:22:51.170 |
|
with this kind of loss, the confidently
|
|
|
00:22:51.170 --> 00:22:54.450 |
|
correct examples don't make any difference; they
|
|
|
00:22:54.450 --> 00:22:56.090 |
|
don't change the decision. |
|
|
|
00:22:56.090 --> 00:22:58.350 |
|
So anything that incurs a loss means |
|
|
|
00:22:58.350 --> 00:23:00.000 |
|
that it's part of your thing that |
|
|
|
00:23:00.000 --> 00:23:01.420 |
|
you're minimizing in your objective
|
|
|
00:23:01.420 --> 00:23:02.190 |
|
function. |
|
|
|
00:23:02.190 --> 00:23:04.070 |
|
But if it doesn't incur a loss, then |
|
|
|
00:23:04.070 --> 00:23:07.180 |
|
it's not changing your objective |
|
|
|
00:23:07.180 --> 00:23:09.710 |
|
evaluation, so it's not causing any |
|
|
|
00:23:09.710 --> 00:23:10.760 |
|
change to your decision. |
|
|
|
00:23:15.700 --> 00:23:18.486 |
|
So I also need to note that there's |
|
|
|
00:23:18.486 --> 00:23:20.373 |
|
like different ways of expressing the |
|
|
|
00:23:20.373 --> 00:23:20.839 |
|
same thing. |
|
|
|
00:23:20.839 --> 00:23:22.939 |
|
So here I express it in terms of this |
|
|
|
00:23:22.940 --> 00:23:23.840 |
|
hinge loss. |
|
|
|
00:23:23.840 --> 00:23:26.399 |
|
But you can also express it in terms of |
|
|
|
00:23:26.400 --> 00:23:28.490 |
|
what people call slack variables. |
|
|
|
00:23:28.490 --> 00:23:30.443 |
|
It's the exact same thing. |
|
|
|
00:23:30.443 --> 00:23:32.850 |
|
It's just that here this slack variable |
|
|
|
00:23:32.850 --> 00:23:35.220 |
|
is equal to 1 minus the margin. |
|
|
|
00:23:35.220 --> 00:23:37.270 |
|
This is like if I bring
|
|
|
00:23:39.220 --> 00:23:39.796 |
|
|
|
|
00:23:39.796 --> 00:23:42.610 |
|
this over here and then bring
|
|
|
00:23:42.610 --> 00:23:43.335 |
|
that over here. |
|
|
|
00:23:43.335 --> 00:23:45.030 |
|
Then this slack variable when you |
|
|
|
00:23:45.030 --> 00:23:47.030 |
|
minimize it will be equal to 1 minus |
|
|
|
00:23:47.030 --> 00:23:47.730 |
|
this margin. |
|
|
|
00:23:49.480 --> 00:23:51.330 |
|
So the slack variable is 1 minus the margin
|
|
|
00:23:51.330 --> 00:23:52.740 |
|
and you pay the same penalty. |
|
|
|
00:23:52.740 --> 00:23:55.020 |
|
But if you're ever like reading about |
|
|
|
00:23:55.020 --> 00:23:57.230 |
|
SVMs and somebody says like slack
|
|
|
00:23:57.230 --> 00:23:58.820 |
|
variable, then I just want you to know |
|
|
|
00:23:58.820 --> 00:23:59.350 |
|
what that means. |
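
NOTE
[Editor's note] In symbols, the slack-variable form just described:

\min_{w,b,\xi} \|w\|^2 + C \sum_i \xi_i \quad \text{subject to} \quad y_i(w^\top x_i + b) \ge 1 - \xi_i, \quad \xi_i \ge 0,

and at the optimum each slack variable equals the hinge penalty, \xi_i = \max(0,\ 1 - y_i(w^\top x_i + b)).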
|
|
|
00:24:00.260 --> 00:24:01.620 |
|
|
|
|
00:24:01.620 --> 00:24:03.760 |
|
So for this example here, we would be |
|
|
|
00:24:03.760 --> 00:24:05.740 |
|
paying some penalty, some slack |
|
|
|
00:24:05.740 --> 00:24:08.010 |
|
penalty, or some hinge loss penalty |
|
|
|
00:24:08.010 --> 00:24:08.780 |
|
equivalently. |
|
|
|
00:24:10.520 --> 00:24:12.840 |
|
Here's an example of an SVM decision |
|
|
|
00:24:12.840 --> 00:24:15.510 |
|
boundary classifying between these red |
|
|
|
00:24:15.510 --> 00:24:17.390 |
|
O's and blue X's.
|
|
|
00:24:17.390 --> 00:24:19.270 |
|
This is from Andrew Zisserman's slides
|
|
|
00:24:19.270 --> 00:24:20.530 |
|
from Oxford. |
|
|
|
00:24:22.710 --> 00:24:25.266 |
|
And here there's a soft margin, so |
|
|
|
00:24:25.266 --> 00:24:27.210 |
|
there's some penalty. |
|
|
|
00:24:27.210 --> 00:24:29.740 |
|
If you were to set this C to infinity,
|
|
|
00:24:29.740 --> 00:24:32.280 |
|
it means that you are still requiring |
|
|
|
00:24:32.280 --> 00:24:34.820 |
|
that every example
|
|
|
00:24:35.960 --> 00:24:37.930 |
|
has a margin of 1.
|
|
|
00:24:38.610 --> 00:24:40.310 |
|
That can be a problem if you have
|
|
|
00:24:40.310 --> 00:24:41.930 |
|
this case, because then you won't be |
|
|
|
00:24:41.930 --> 00:24:43.360 |
|
able to optimize it because it's |
|
|
|
00:24:43.360 --> 00:24:44.000 |
|
impossible. |
|
|
|
00:24:45.030 --> 00:24:48.150 |
|
So if you set a small C, C = 10, then
|
|
|
00:24:48.150 --> 00:24:49.790 |
|
you pay a small penalty when things |
|
|
|
00:24:49.790 --> 00:24:50.495 |
|
violate the margin. |
|
|
|
00:24:50.495 --> 00:24:52.360 |
|
And in this case it finds the decision |
|
|
|
00:24:52.360 --> 00:24:54.180 |
|
boundary where it incorrectly |
|
|
|
00:24:54.180 --> 00:24:57.270 |
|
classifies this one example and you |
|
|
|
00:24:57.270 --> 00:25:00.473 |
|
have these four examples that are within the
|
|
|
00:25:00.473 --> 00:25:00.829 |
|
margin. |
|
|
|
00:25:00.830 --> 00:25:01.310 |
|
or on it.
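
NOTE
[Editor's sketch, not from the lecture] How the penalty C appears in scikit-learn's SVC; X and y are assumed training data, and the large value only approximates C = infinity.

from sklearn.svm import SVC

soft = SVC(kernel="linear", C=10.0).fit(X, y)   # soft margin, as on the slide
hard = SVC(kernel="linear", C=1e6).fit(X, y)    # nearly a hard margin
print(soft.support_vectors_)   # the examples on or within the margin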
|
|
|
00:25:05.750 --> 00:25:06.300 |
|
OK. |
|
|
|
00:25:06.300 --> 00:25:08.500 |
|
Any questions about that so far? |
|
|
|
00:25:09.590 --> 00:25:09.980 |
|
OK. |
|
|
|
00:25:11.890 --> 00:25:16.150 |
|
So I'm going to talk about the |
|
|
|
00:25:16.150 --> 00:25:18.270 |
|
objective functions a little bit more, |
|
|
|
00:25:18.270 --> 00:25:20.730 |
|
and to do that I'll introduce this |
|
|
|
00:25:20.730 --> 00:25:22.180 |
|
thing called the Representer theorem. |
|
|
|
00:25:22.940 --> 00:25:25.500 |
|
So the Representer theorem basically |
|
|
|
00:25:25.500 --> 00:25:29.100 |
|
says that if you have some model, some |
|
|
|
00:25:29.100 --> 00:25:31.240 |
|
linear model, that's W transpose X. |
|
|
|
00:25:32.240 --> 00:25:37.240 |
|
Then the optimal W in many cases can
|
|
|
00:25:37.240 --> 00:25:43.100 |
|
be expressed as a sum of some weight for
|
|
|
00:25:43.100 --> 00:25:46.210 |
|
each example times the example features
|
|
|
00:25:46.970 --> 00:25:49.240 |
|
and the label of the features, or the
|
|
|
00:25:49.240 --> 00:25:50.450 |
|
label of the data point.
|
|
|
00:25:52.080 --> 00:25:55.300 |
|
So the optimal weight vector is just a |
|
|
|
00:25:55.300 --> 00:25:58.160 |
|
weighted average of the input training |
|
|
|
00:25:58.160 --> 00:25:59.270 |
|
example features. |
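
NOTE
[Editor's note] The Representer theorem statement just described, in symbols: the optimal weight vector can be written as

w^* = \sum_i \alpha_i\, y_i\, x_i,

so the prediction becomes f(x) = \sum_i \alpha_i\, y_i\, x_i^\top x + b, a weighted combination of the training examples.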
|
|
|
00:26:02.260 --> 00:26:03.940 |
|
And there are certain caveats and
|
|
|
00:26:03.940 --> 00:26:06.760 |
|
conditions, but this is true for L2 |
|
|
|
00:26:06.760 --> 00:26:10.550 |
|
logistic regression or SVM for example. |
|
|
|
00:26:13.120 --> 00:26:17.500 |
|
And for SVMs these alphas are zero for
|
|
|
00:26:17.500 --> 00:26:20.066 |
|
all the non-support vectors, because only the
|
|
|
00:26:20.066 --> 00:26:22.080 |
|
support vectors influence the decision. |
|
|
|
00:26:23.420 --> 00:26:24.800 |
|
So it actually depends on a very
|
|
|
00:26:24.800 --> 00:26:26.390 |
|
small number of training examples. |
|
|
|
00:26:28.710 --> 00:26:30.690 |
|
So I'm not going to go deep into the |
|
|
|
00:26:30.690 --> 00:26:33.000 |
|
math and I don't expect anybody to be |
|
|
|
00:26:33.000 --> 00:26:34.880 |
|
able to derive the dual or anything |
|
|
|
00:26:34.880 --> 00:26:38.127 |
|
like that, but I just want to express |
|
|
|
00:26:38.127 --> 00:26:39.940 |
|
these objectives and the different
|
|
|
00:26:39.940 --> 00:26:41.240 |
|
ways of looking at the problem. |
|
|
|
00:26:42.100 --> 00:26:44.433 |
|
So in terms of prediction, I
|
|
|
00:26:44.433 --> 00:26:46.030 |
|
already gave you this formulation |
|
|
|
00:26:46.030 --> 00:26:47.833 |
|
that's called the primal,
|
|
|
00:26:47.833 --> 00:26:49.550 |
|
where you're optimizing in terms of the |
|
|
|
00:26:49.550 --> 00:26:50.310 |
|
feature weights. |
|
|
|
00:26:51.570 --> 00:26:53.550 |
|
And then you can also represent it in |
|
|
|
00:26:53.550 --> 00:26:56.020 |
|
terms of, whoops, the
|
|
|
00:26:56.020 --> 00:26:56.700 |
|
dual. |
|
|
|
00:26:57.700 --> 00:26:58.280 |
|
Where to go? |
|
|
|
00:26:59.380 --> 00:27:01.400 |
|
Alright, you can also represent it in |
|
|
|
00:27:01.400 --> 00:27:03.595 |
|
what's called a dual, where instead of |
|
|
|
00:27:03.595 --> 00:27:05.330 |
|
optimizing over feature weights, you're |
|
|
|
00:27:05.330 --> 00:27:06.920 |
|
optimizing over the weights of each |
|
|
|
00:27:06.920 --> 00:27:07.520 |
|
example. |
|
|
|
00:27:08.160 --> 00:27:10.470 |
|
Where again the sum of those weights times the
|
|
|
00:27:10.470 --> 00:27:12.720 |
|
examples gives you your weight vector. |
|
|
|
00:27:13.560 --> 00:27:16.256 |
|
And remember that these weights are the
|
|
|
00:27:16.256 --> 00:27:19.560 |
|
sum of alpha YX and when I plug that in |
|
|
|
00:27:19.560 --> 00:27:22.410 |
|
here then I see in the dual that my |
|
|
|
00:27:22.410 --> 00:27:25.914 |
|
prediction is the sum of alpha Y times
|
|
|
00:27:25.914 --> 00:27:28.395 |
|
the dot product of each training |
|
|
|
00:27:28.395 --> 00:27:30.910 |
|
example with the example that I'm |
|
|
|
00:27:30.910 --> 00:27:31.680 |
|
predicting for. |
|
|
|
00:27:33.230 --> 00:27:33.980 |
|
|
|
|
00:27:33.980 --> 00:27:36.540 |
|
|
|
|
00:27:36.540 --> 00:27:39.255 |
|
It's an average of the similarities of |
|
|
|
00:27:39.255 --> 00:27:43.550 |
|
the training examples with the features |
|
|
|
00:27:43.550 --> 00:27:45.330 |
|
that I'm making a prediction for. |
|
|
|
00:27:46.110 --> 00:27:47.730 |
|
Where the similarity is defined by A |
|
|
|
00:27:47.730 --> 00:27:49.020 |
|
dot product in this case. |
|
|
|
00:27:50.820 --> 00:27:53.831 |
|
The dot product is the
|
|
|
00:27:53.831 --> 00:27:56.571 |
|
|
|
|
00:27:56.571 --> 00:27:58.820 |
|
sum of the products of the elements.
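
NOTE
[Editor's sketch, not from the lecture] The dual-form prediction just described, assuming arrays alphas, ys, and X_train with one row per training example.

import numpy as np

def dual_predict(alphas, ys, X_train, x, b=0.0):
    # f(x) = sum_i alpha_i * y_i * <x_i, x> + b; for an SVM most alphas
    # are zero, so only the support vectors contribute.
    return np.sign(np.sum(alphas * ys * (X_train @ x)) + b)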
|
|
|
00:28:01.790 --> 00:28:05.558 |
|
And this is just plugging it in.
|
|
|
00:28:05.558 --> 00:28:06.950 |
|
|
|
|
00:28:06.950 --> 00:28:08.750 |
|
If I plug everything in and then write |
|
|
|
00:28:08.750 --> 00:28:10.350 |
|
the objective of the dual it comes out |
|
|
|
00:28:10.350 --> 00:28:11.030 |
|
to this. |
|
|
|
00:28:13.950 --> 00:28:17.410 |
|
For an SVM, alpha is sparse, which means
|
|
|
00:28:17.410 --> 00:28:19.080 |
|
most of the values are zero. |
|
|
|
00:28:19.080 --> 00:28:22.115 |
|
So the SVM only depends on these few |
|
|
|
00:28:22.115 --> 00:28:25.920 |
|
examples, and so it's only nonzero for |
|
|
|
00:28:25.920 --> 00:28:27.640 |
|
the support vectors, the examples that |
|
|
|
00:28:27.640 --> 00:28:28.560 |
|
are within the margin. |
|
|
|
00:28:35.550 --> 00:28:37.460 |
|
So the reason that the dual will be |
|
|
|
00:28:37.460 --> 00:28:40.280 |
|
helpful is that it
|
|
|
00:28:41.900 --> 00:28:45.020 |
|
allows us to deal with a
|
|
|
00:28:45.020 --> 00:28:45.930 |
|
nonlinear case. |
|
|
|
00:28:45.930 --> 00:28:48.422 |
|
So in the top example, we might say a |
|
|
|
00:28:48.422 --> 00:28:50.180 |
|
linear classifier is OK, it only gets |
|
|
|
00:28:50.180 --> 00:28:51.550 |
|
one example wrong. |
|
|
|
00:28:51.550 --> 00:28:53.179 |
|
I can live with that. |
|
|
|
00:28:53.180 --> 00:28:55.437 |
|
But in the bottom case, a linear |
|
|
|
00:28:55.437 --> 00:28:57.860 |
|
classifier seems like a really bad choice,
|
|
|
00:28:57.860 --> 00:28:58.210 |
|
right? |
|
|
|
00:28:58.210 --> 00:29:00.995 |
|
Like it's obviously nonlinear and a |
|
|
|
00:29:00.995 --> 00:29:02.010 |
|
linear classifier is going to get |
|
|
|
00:29:02.010 --> 00:29:02.780 |
|
really high error. |
|
|
|
00:29:03.530 --> 00:29:06.790 |
|
So what is some way that I could try |
|
|
|
00:29:06.790 --> 00:29:08.630 |
|
to, let's say I still want to stick |
|
|
|
00:29:08.630 --> 00:29:10.220 |
|
with a linear classifier, what's |
|
|
|
00:29:10.220 --> 00:29:12.750 |
|
something that I can do in
|
|
|
00:29:12.750 --> 00:29:16.375 |
|
this case to improve the ability of the |
|
|
|
00:29:16.375 --> 00:29:16.950 |
|
linear classifier? |
|
|
|
00:29:19.410 --> 00:29:19.860 |
|
Yeah. |
|
|
|
00:29:22.680 --> 00:29:24.680 |
|
So I can change the
|
|
|
00:29:24.680 --> 00:29:26.880 |
|
coordinate system or change the |
|
|
|
00:29:26.880 --> 00:29:28.740 |
|
features in some way so that they |
|
|
|
00:29:28.740 --> 00:29:30.140 |
|
become linearly separable. |
|
|
|
00:29:30.930 --> 00:29:32.160 |
|
And the new feature space. |
|
|
|
00:29:32.230 --> 00:29:34.440 |
|
Can we project it in different
|
|
|
00:29:34.440 --> 00:29:34.890 |
|
dimensions? |
|
|
|
00:29:37.530 --> 00:29:38.160 |
|
Right, yeah. |
|
|
|
00:29:38.160 --> 00:29:40.230 |
|
And we can also project it into a |
|
|
|
00:29:40.230 --> 00:29:41.810 |
|
higher dimensional space, for example, |
|
|
|
00:29:41.810 --> 00:29:43.300 |
|
where it is linearly separable. |
|
|
|
00:29:44.200 --> 00:29:45.666 |
|
Exactly, those are the two.
|
|
|
00:29:45.666 --> 00:29:47.950 |
|
I think there's either 2 valid answers |
|
|
|
00:29:47.950 --> 00:29:48.750 |
|
that I can think of. |
|
|
|
00:29:49.900 --> 00:29:52.720 |
|
So for example, if we were to use polar |
|
|
|
00:29:52.720 --> 00:29:56.040 |
|
coordinates, then we could represent |
|
|
|
00:29:56.040 --> 00:29:59.273 |
|
instead of the like position on the X&Y |
|
|
|
00:29:59.273 --> 00:30:01.190 |
|
axis or X1 and X2 axis. |
|
|
|
00:30:01.910 --> 00:30:03.770 |
|
We could represent the distance and |
|
|
|
00:30:03.770 --> 00:30:05.550 |
|
angle of each point from the center. |
|
|
|
00:30:06.220 --> 00:30:08.980 |
|
And then here's that new coordinate |
|
|
|
00:30:08.980 --> 00:30:09.350 |
|
space. |
|
|
|
00:30:09.350 --> 00:30:11.300 |
|
And then this is a really easy like |
|
|
|
00:30:11.300 --> 00:30:12.120 |
|
linear decision. |
|
|
|
00:30:12.860 --> 00:30:14.440 |
|
So that's one way to solve it. |
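
NOTE
[Editor's sketch, not from the lecture] The polar-coordinate change of features just described.

import numpy as np

def to_polar(X):
    # Replace (x1, x2) with (radius, angle) measured from the origin.
    r = np.hypot(X[:, 0], X[:, 1])
    theta = np.arctan2(X[:, 1], X[:, 0])
    return np.column_stack([r, theta])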
|
|
|
00:30:16.250 --> 00:30:18.520 |
|
Another way is that we can map the data |
|
|
|
00:30:18.520 --> 00:30:21.520 |
|
into another higher dimensional space. So
|
|
|
00:30:21.520 --> 00:30:23.550 |
|
instead of
|
|
|
00:30:23.550 --> 00:30:25.920 |
|
representing X1 and X2 directly,
|
|
|
00:30:25.920 --> 00:30:30.209 |
|
if I represent X1 squared and X2
|
|
|
00:30:30.209 --> 00:30:33.620 |
|
squared and X1 times X2 times
|
|
|
00:30:34.450 --> 00:30:35.960 |
|
the square root of 2.
|
|
|
00:30:36.180 --> 00:30:38.580 |
|
That's helpful in some math
|
|
|
00:30:38.580 --> 00:30:39.240 |
|
later. |
|
|
|
00:30:39.240 --> 00:30:41.830 |
|
If I represent these three coordinates |
|
|
|
00:30:41.830 --> 00:30:44.680 |
|
instead, then it gets mapped as is |
|
|
|
00:30:44.680 --> 00:30:47.545 |
|
shown in this 3D plot, and now there's |
|
|
|
00:30:47.545 --> 00:30:51.020 |
|
a linear boundary, a plane, that can
|
|
|
00:30:51.020 --> 00:30:54.040 |
|
separate the circles from the |
|
|
|
00:30:54.040 --> 00:30:54.630 |
|
triangles. |
|
|
|
00:30:55.490 --> 00:30:57.110 |
|
So this also works right? |
|
|
|
00:30:57.110 --> 00:30:57.920 |
|
Two ways to do it. |
|
|
|
00:30:57.920 --> 00:31:00.270 |
|
I can change the features or map into a |
|
|
|
00:31:00.270 --> 00:31:01.380 |
|
higher dimensional space. |
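
NOTE
[Editor's sketch, not from the lecture] The quadratic feature map just described, taking (x1, x2) to (x1^2, x2^2, sqrt(2) * x1 * x2).

import numpy as np

def phi(x):
    # The sqrt(2) factor is what makes the dot products line up with a
    # simple kernel, as shown later in the lecture.
    return np.array([x[0]**2, x[1]**2, np.sqrt(2) * x[0] * x[1]])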
|
|
|
00:31:04.820 --> 00:31:07.180 |
|
So I can write this
|
|
|
00:31:07.180 --> 00:31:09.740 |
|
as I have some kind of transformation |
|
|
|
00:31:09.740 --> 00:31:12.190 |
|
on my input features and then given |
|
|
|
00:31:12.190 --> 00:31:13.730 |
|
that transformation I then have a |
|
|
|
00:31:13.730 --> 00:31:16.510 |
|
linear model and I can solve that using |
|
|
|
00:31:16.510 --> 00:31:17.860 |
|
an SVM if I want. |
|
|
|
00:31:24.020 --> 00:31:27.569 |
|
So if I'm representing this
|
|
|
00:31:27.570 --> 00:31:30.130 |
|
directly in the primal, then I can say |
|
|
|
00:31:30.130 --> 00:31:33.090 |
|
that I just map my original features to |
|
|
|
00:31:33.090 --> 00:31:34.970 |
|
my new features through this Phi, |
|
|
|
00:31:34.970 --> 00:31:37.120 |
|
just some feature function. |
|
|
|
00:31:37.980 --> 00:31:40.220 |
|
And then I solve for my weights in the |
|
|
|
00:31:40.220 --> 00:31:41.030 |
|
new space. |
|
|
|
00:31:42.030 --> 00:31:43.540 |
|
Sometimes though, in order to make the |
|
|
|
00:31:43.540 --> 00:31:45.225 |
|
data linearly separable you might have |
|
|
|
00:31:45.225 --> 00:31:47.050 |
|
to map into a very high dimensional |
|
|
|
00:31:47.050 --> 00:31:47.480 |
|
space. |
|
|
|
00:31:47.480 --> 00:31:50.390 |
|
So here like doing this trick where I |
|
|
|
00:31:50.390 --> 00:31:53.370 |
|
look at the squares and then the |
|
|
|
00:31:53.370 --> 00:31:55.390 |
|
product of the individual variables |
|
|
|
00:31:55.390 --> 00:31:57.510 |
|
only went from 2 to 3 dimensions. |
|
|
|
00:31:57.510 --> 00:31:59.162 |
|
But if I had started with 1000 |
|
|
|
00:31:59.162 --> 00:32:01.510 |
|
dimensions and I was like looking at |
|
|
|
00:32:01.510 --> 00:32:03.666 |
|
all products of pairs of variables, |
|
|
|
00:32:03.666 --> 00:32:05.292 |
|
this would become very high |
|
|
|
00:32:05.292 --> 00:32:05.680 |
|
dimensional. |
|
|
|
00:32:07.400 --> 00:32:08.880 |
|
So I might want to avoid that. |
|
|
|
00:32:10.320 --> 00:32:12.050 |
|
So we can use the dual and I'm not |
|
|
|
00:32:12.050 --> 00:32:13.690 |
|
going to step through the equations, |
|
|
|
00:32:13.690 --> 00:32:16.199 |
|
but it's just showing that in the dual, |
|
|
|
00:32:16.200 --> 00:32:18.750 |
|
where before you had a decision |
|
|
|
00:32:18.750 --> 00:32:20.850 |
|
in terms of a dot product of original |
|
|
|
00:32:20.850 --> 00:32:22.960 |
|
features, now it's a dot product of the |
|
|
|
00:32:22.960 --> 00:32:24.060 |
|
transform features. |
|
|
|
00:32:24.680 --> 00:32:26.280 |
|
So it's just the transformed features |
|
|
|
00:32:26.280 --> 00:32:28.180 |
|
transpose times the other transform |
|
|
|
00:32:28.180 --> 00:32:28.650 |
|
features. |
|
|
|
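NOTE
A sketch of the dual decision function described above, assuming numpy; alphas, b, and the support vectors are placeholders for whatever the optimizer produced:

```python
import numpy as np

def dual_decision(x, support_X, support_y, alphas, b, kernel):
    """f(x) = sum_i alpha_i * y_i * K(x_i, x) + b over the support vectors."""
    k = np.array([kernel(xi, x) for xi in support_X])
    return float(np.dot(alphas * support_y, k)) + b
```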
00:32:32.240 --> 00:32:35.300 |
|
And sometimes we don't even need to |
|
|
|
00:32:35.300 --> 00:32:37.300 |
|
compute the transformed features. |
|
|
|
00:32:37.300 --> 00:32:38.970 |
|
All we really need at the end of the |
|
|
|
00:32:38.970 --> 00:32:41.022 |
|
day is this kernel function. |
|
|
|
00:32:41.022 --> 00:32:43.410 |
|
The kernel is a similarity function. |
|
|
|
00:32:43.410 --> 00:32:45.192 |
|
It's a certain kind of similarity |
|
|
|
00:32:45.192 --> 00:32:48.860 |
|
function that defines how similar two |
|
|
|
00:32:48.860 --> 00:32:49.790 |
|
feature vectors are. |
|
|
|
00:32:50.510 --> 00:32:52.710 |
|
So I could compute it explicitly. |
|
|
|
00:32:53.920 --> 00:32:56.120 |
|
By transforming the features and taking |
|
|
|
00:32:56.120 --> 00:32:57.740 |
|
their dot product and then I could |
|
|
|
00:32:57.740 --> 00:32:59.560 |
|
store this kernel value for all my |
|
|
|
00:32:59.560 --> 00:33:01.655 |
|
pairs of features in the training set, |
|
|
|
00:33:01.655 --> 00:33:02.900 |
|
for example, and then do my |
|
|
|
00:33:02.900 --> 00:33:03.680 |
|
optimization. |
|
|
|
00:33:04.330 --> 00:33:05.980 |
|
I don't necessarily need to compute it |
|
|
|
00:33:05.980 --> 00:33:08.142 |
|
every time, and sometimes I don't need |
|
|
|
00:33:08.142 --> 00:33:09.740 |
|
to compute it at all. |
|
|
|
00:33:11.500 --> 00:33:12.930 |
|
An example where I don't need to |
|
|
|
00:33:12.930 --> 00:33:15.150 |
|
compute it is in this case where I was |
|
|
|
00:33:15.150 --> 00:33:17.230 |
|
looking at the square of the individual |
|
|
|
00:33:17.230 --> 00:33:17.970 |
|
variables. |
|
|
|
00:33:18.610 --> 00:33:20.750 |
|
And the product of pairs of variables. |
|
|
|
00:33:22.140 --> 00:33:25.190 |
|
You can show that if you like, do this |
|
|
|
00:33:25.190 --> 00:33:27.920 |
|
multiplication of these two different |
|
|
|
00:33:27.920 --> 00:33:29.830 |
|
feature vectors X and Z, |
|
|
|
00:33:31.090 --> 00:33:32.823 |
|
and you expand it. |
|
|
|
00:33:32.823 --> 00:33:34.970 |
|
Then you can see that it actually ends |
|
|
|
00:33:34.970 --> 00:33:39.575 |
|
up being that the product of this Phi |
|
|
|
00:33:39.575 --> 00:33:42.410 |
|
of X times Phi of Z. |
|
|
|
00:33:43.260 --> 00:33:46.422 |
|
Is equal to the square of the dot |
|
|
|
00:33:46.422 --> 00:33:46.780 |
|
product. |
|
|
|
00:33:46.780 --> 00:33:49.473 |
|
So you can get the same benefit just by |
|
|
|
00:33:49.473 --> 00:33:50.837 |
|
squaring the dot product. |
|
|
|
00:33:50.837 --> 00:33:53.150 |
|
And you can compute the similarity just |
|
|
|
00:33:53.150 --> 00:33:55.440 |
|
by squaring the dot product instead of |
|
|
|
00:33:55.440 --> 00:33:56.650 |
|
needing to map into the higher |
|
|
|
00:33:56.650 --> 00:33:58.660 |
|
dimensional space and then taking the |
|
|
|
00:33:58.660 --> 00:33:59.230 |
|
dot product. |
|
|
|
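NOTE
A quick numeric check of that shortcut, assuming numpy: the dot product in the mapped space equals the squared dot product in the original space.

```python
import numpy as np

def phi(v):
    # the explicit 3-D map: (x1^2, x2^2, sqrt(2)*x1*x2)
    return np.array([v[0]**2, v[1]**2, np.sqrt(2) * v[0] * v[1]])

x = np.array([1.0, 2.0])
z = np.array([3.0, -1.0])
print(phi(x) @ phi(z))   # 1.0, computed in the mapped space
print((x @ z) ** 2)      # 1.0, computed without ever mapping
```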
00:34:00.400 --> 00:34:02.120 |
|
So if you had like a very high |
|
|
|
00:34:02.120 --> 00:34:03.540 |
|
dimensional feature, this would save a |
|
|
|
00:34:03.540 --> 00:34:04.230 |
|
lot of time. |
|
|
|
00:34:04.230 --> 00:34:07.340 |
|
You wouldn't need to compute a million |
|
|
|
00:34:07.340 --> 00:34:10.910 |
|
dimensional feature explicitly. |
|
|
|
00:34:13.930 --> 00:34:15.680 |
|
And yeah. |
|
|
|
00:34:16.550 --> 00:34:18.310 |
|
So one thing to note though, is that |
|
|
|
00:34:18.310 --> 00:34:19.840 |
|
because you're learning in terms of the |
|
|
|
00:34:19.840 --> 00:34:22.950 |
|
distance of pairs of examples, the |
|
|
|
00:34:22.950 --> 00:34:24.760 |
|
optimization tends to be pretty slow |
|
|
|
00:34:24.760 --> 00:34:26.389 |
|
for kernel methods, at least in the |
|
|
|
00:34:26.390 --> 00:34:27.730 |
|
traditional kernel methods. |
|
|
|
00:34:28.440 --> 00:34:30.520 |
|
There's the Pegasos algorithm that is a |
|
|
|
00:34:30.520 --> 00:34:32.710 |
|
lot faster for kernels, although I'm |
|
|
|
00:34:32.710 --> 00:34:35.050 |
|
not going to go into depth for its |
|
|
|
00:34:35.050 --> 00:34:35.900 |
|
kernelized version. |
|
|
|
00:34:35.900 --> 00:34:36.140 |
|
Yep. |
|
|
|
00:34:39.220 --> 00:34:40.700 |
|
Gives us a vector. |
|
|
|
00:34:42.920 --> 00:34:45.130 |
|
X transpose times Z. |
|
|
|
00:34:46.120 --> 00:34:49.250 |
|
This one gives us a scalar |
|
|
|
00:34:49.250 --> 00:34:51.760 |
|
because X and Z are the same length, |
|
|
|
00:34:51.760 --> 00:34:53.000 |
|
they're just two different feature |
|
|
|
00:34:53.000 --> 00:34:53.570 |
|
vectors. |
|
|
|
00:34:54.580 --> 00:34:57.028 |
|
And so they're both, like, say n by |
|
|
|
00:34:57.028 --> 00:34:57.314 |
|
one. |
|
|
|
00:34:57.314 --> 00:34:59.430 |
|
So then I have a 1 by n times |
|
|
|
00:34:59.430 --> 00:35:02.490 |
|
n by 1, which gives me a 1 by 1. |
|
|
|
00:35:04.390 --> 00:35:05.942 |
|
Yeah, so it's a dot product. |
|
|
|
00:35:05.942 --> 00:35:08.740 |
|
So that dot product of two vectors |
|
|
|
00:35:08.740 --> 00:35:10.490 |
|
gives you just a single value. |
|
|
|
00:35:14.290 --> 00:35:16.340 |
|
So there's various kinds of kernels |
|
|
|
00:35:16.340 --> 00:35:17.260 |
|
that people use. |
|
|
|
00:35:17.260 --> 00:35:18.400 |
|
Polynomial. |
|
|
|
00:35:19.430 --> 00:35:23.005 |
|
The one we talked about, Gaussian, which |
|
|
|
00:35:23.005 --> 00:35:25.610 |
|
is where you say that the similarity is |
|
|
|
00:35:25.610 --> 00:35:28.630 |
|
based on the squared distance |
|
|
|
00:35:28.630 --> 00:35:30.060 |
|
between two feature vectors. |
|
|
|
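NOTE
The two kernels just named, written out as a minimal sketch assuming numpy:

```python
import numpy as np

def polynomial_kernel(x, z, d=2, c=1.0):
    """K(x, z) = (x.z + c)^d; the shortcut form of a polynomial feature map."""
    return (x @ z + c) ** d

def gaussian_kernel(x, z, sigma=1.0):
    """K(x, z) = exp(-||x - z||^2 / (2 sigma^2)); similarity from squared distance."""
    return np.exp(-np.sum((x - z) ** 2) / (2.0 * sigma ** 2))
```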
00:35:31.670 --> 00:35:32.360 |
|
And. |
|
|
|
00:35:33.050 --> 00:35:34.730 |
|
And they can all just be used in the |
|
|
|
00:35:34.730 --> 00:35:37.440 |
|
same way by computing the kernel value. |
|
|
|
00:35:37.440 --> 00:35:39.100 |
|
In some cases you might compute |
|
|
|
00:35:39.100 --> 00:35:40.700 |
|
explicitly, like for the Gaussian |
|
|
|
00:35:40.700 --> 00:35:42.726 |
|
kernel and other places, and other |
|
|
|
00:35:42.726 --> 00:35:44.550 |
|
cases there's a shortcut for the |
|
|
|
00:35:44.550 --> 00:35:45.170 |
|
polynomial. |
|
|
|
00:35:46.800 --> 00:35:49.010 |
|
But you just plug in your kernel |
|
|
|
00:35:49.010 --> 00:35:50.190 |
|
function and then you can do this |
|
|
|
00:35:50.190 --> 00:35:51.040 |
|
optimization. |
|
|
|
00:35:52.850 --> 00:35:54.760 |
|
So I'm going to talk about optimization |
|
|
|
00:35:54.760 --> 00:35:56.800 |
|
a little bit later, so I just want to |
|
|
|
00:35:56.800 --> 00:35:58.430 |
|
show a couple of examples of how the |
|
|
|
00:35:58.430 --> 00:36:00.410 |
|
decision boundary can be affected by |
|
|
|
00:36:00.410 --> 00:36:02.090 |
|
some of the SVM parameters. |
|
|
|
00:36:02.790 --> 00:36:05.910 |
|
So one of the parameters is C. C is like: |
|
|
|
00:36:05.910 --> 00:36:07.660 |
|
How important is it to make sure that |
|
|
|
00:36:07.660 --> 00:36:10.625 |
|
every example is like outside the |
|
|
|
00:36:10.625 --> 00:36:11.779 |
|
margin or on the margin? |
|
|
|
00:36:12.530 --> 00:36:14.950 |
|
If it's Infinity, then you're forcing |
|
|
|
00:36:14.950 --> 00:36:16.020 |
|
everything to be correct. |
|
|
|
00:36:16.020 --> 00:36:18.630 |
|
You're forcing that everything has a |
|
|
|
00:36:18.630 --> 00:36:20.030 |
|
margin of at least one. |
|
|
|
00:36:20.810 --> 00:36:22.750 |
|
And so I wouldn't even be able to solve |
|
|
|
00:36:22.750 --> 00:36:24.610 |
|
it if I were doing a linear classifier. |
|
|
|
00:36:24.610 --> 00:36:27.556 |
|
But in this case it's an RBF |
|
|
|
00:36:27.556 --> 00:36:30.060 |
|
kernel, which means that the |
|
|
|
00:36:30.060 --> 00:36:31.110 |
|
distance is defined. |
|
|
|
00:36:31.110 --> 00:36:32.920 |
|
The distance between examples is |
|
|
|
00:36:32.920 --> 00:36:35.510 |
|
defined as like this squared distance |
|
|
|
00:36:35.510 --> 00:36:37.160 |
|
divided by some Sigma. |
|
|
|
00:36:38.040 --> 00:36:40.390 |
|
Sigma squared, so in this case I can |
|
|
|
00:36:40.390 --> 00:36:41.990 |
|
linearly separate it with the RBF |
|
|
|
00:36:41.990 --> 00:36:43.490 |
|
kernel and I get this function. |
|
|
|
00:36:44.140 --> 00:36:48.490 |
|
If I reduce C then I get |
|
|
|
00:36:48.490 --> 00:36:51.300 |
|
an additional sample that is |
|
|
|
00:36:51.300 --> 00:36:53.880 |
|
within the margin over here, but on |
|
|
|
00:36:53.880 --> 00:36:55.885 |
|
average examples are further from the |
|
|
|
00:36:55.885 --> 00:36:57.260 |
|
margin because I've relaxed my |
|
|
|
00:36:57.260 --> 00:36:57.970 |
|
constraints. |
|
|
|
00:36:57.970 --> 00:36:59.840 |
|
So sometimes you can get a better |
|
|
|
00:36:59.840 --> 00:37:02.820 |
|
classifier by relaxing it; you don't always want to |
|
|
|
00:37:02.820 --> 00:37:05.140 |
|
have C equal to Infinity or force that |
|
|
|
00:37:05.140 --> 00:37:06.970 |
|
everything is outside the margin, even |
|
|
|
00:37:06.970 --> 00:37:07.860 |
|
if it's possible. |
|
|
|
00:37:09.610 --> 00:37:10.715 |
|
Often you have to optimize. |
|
|
|
00:37:10.715 --> 00:37:12.700 |
|
You have to do like some kind of cross |
|
|
|
00:37:12.700 --> 00:37:14.710 |
|
validation to choose C and that's one |
|
|
|
00:37:14.710 --> 00:37:16.330 |
|
of the things that I always hated about |
|
|
|
00:37:16.330 --> 00:37:18.571 |
|
SVMS because they can take a while to |
|
|
|
00:37:18.571 --> 00:37:19.770 |
|
optimize and you have to do that |
|
|
|
00:37:19.770 --> 00:37:20.130 |
|
search. |
|
|
|
00:37:22.990 --> 00:37:27.090 |
|
So if you relax it even more, now |
|
|
|
00:37:27.090 --> 00:37:28.215 |
|
there's like a very weak penalty. |
|
|
|
00:37:28.215 --> 00:37:29.860 |
|
So now you have lots of things within |
|
|
|
00:37:29.860 --> 00:37:30.390 |
|
the margin. |
|
|
|
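NOTE
One way to see the effect of C, assuming scikit-learn is available (a sketch, not the lecture's exact figures):

```python
from sklearn.datasets import make_circles
from sklearn.svm import SVC

X, y = make_circles(n_samples=200, noise=0.1, factor=0.5, random_state=0)
# Huge C approximates forcing every example outside the margin;
# small C relaxes the constraints and allows margin violations.
for C in [1e6, 1.0, 0.01]:
    clf = SVC(kernel="rbf", C=C, gamma=1.0).fit(X, y)
    print(C, int(clf.n_support_.sum()), clf.score(X, y))
```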
00:37:32.280 --> 00:37:34.499 |
|
Then the other parameter, your kernel |
|
|
|
00:37:34.500 --> 00:37:37.570 |
|
sometimes has parameters; for the RBF |
|
|
|
00:37:37.570 --> 00:37:40.630 |
|
kernel it's how sharp your distance |
|
|
|
00:37:40.630 --> 00:37:41.690 |
|
function is. |
|
|
|
00:37:41.690 --> 00:37:43.190 |
|
So if Sigma is |
|
|
|
00:37:43.470 --> 00:37:47.625 |
|
1, then, whatever, it's one. |
|
|
|
00:37:47.625 --> 00:37:50.240 |
|
As Sigma goes closer to zero, |
|
|
|
00:37:50.240 --> 00:37:53.440 |
|
though, your RBF kernel becomes more of a |
|
|
|
00:37:53.440 --> 00:37:55.165 |
|
nearest neighbor classifier, because if |
|
|
|
00:37:55.165 --> 00:37:56.739 |
|
Sigma is really close to 0. |
|
|
|
00:37:57.700 --> 00:37:59.730 |
|
Then it means that, for an example that |
|
|
|
00:37:59.730 --> 00:38:01.760 |
|
you're really close to: |
|
|
|
00:38:01.760 --> 00:38:03.857 |
|
Only if you're super close to an |
|
|
|
00:38:03.857 --> 00:38:06.459 |
|
example will it have a |
|
|
|
00:38:06.460 --> 00:38:08.970 |
|
high similarity, and examples that are |
|
|
|
00:38:08.970 --> 00:38:11.035 |
|
further away will have much lower |
|
|
|
00:38:11.035 --> 00:38:11.540 |
|
similarity. |
|
|
|
00:38:12.360 --> 00:38:14.080 |
|
So you can see that with Sigma equals |
|
|
|
00:38:14.080 --> 00:38:16.010 |
|
one you just fit like these circular |
|
|
|
00:38:16.010 --> 00:38:17.090 |
|
decision functions. |
|
|
|
00:38:17.820 --> 00:38:19.770 |
|
As Sigma gets smaller, it starts to |
|
|
|
00:38:19.770 --> 00:38:21.680 |
|
become like a little bit more wobbly. |
|
|
|
00:38:22.440 --> 00:38:24.050 |
|
This is the this is the decision |
|
|
|
00:38:24.050 --> 00:38:25.960 |
|
boundary, this solid line, in case |
|
|
|
00:38:25.960 --> 00:38:27.630 |
|
that's not clear, with the green on one |
|
|
|
00:38:27.630 --> 00:38:29.310 |
|
side and the yellow on the other side. |
|
|
|
00:38:30.140 --> 00:38:32.459 |
|
And then as it gets smaller, then it |
|
|
|
00:38:32.460 --> 00:38:33.800 |
|
starts to become like a nearest |
|
|
|
00:38:33.800 --> 00:38:34.670 |
|
neighbor classifier. |
|
|
|
00:38:34.670 --> 00:38:36.370 |
|
So almost everything is a support |
|
|
|
00:38:36.370 --> 00:38:38.140 |
|
vector except for the very easiest |
|
|
|
00:38:38.140 --> 00:38:40.429 |
|
points on the interior here and the |
|
|
|
00:38:40.430 --> 00:38:41.110 |
|
decision boundary. |
|
|
|
00:38:41.110 --> 00:38:43.050 |
|
You can start to become really |
|
|
|
00:38:43.050 --> 00:38:45.935 |
|
arbitrarily complicated, just like just |
|
|
|
00:38:45.935 --> 00:38:47.329 |
|
like a nearest neighbor. |
|
|
|
00:38:48.570 --> 00:38:49.150 |
|
Question. |
|
|
|
00:38:50.520 --> 00:38:51.895 |
|
What? |
|
|
|
00:38:51.895 --> 00:38:54.120 |
|
So yeah, good question. |
|
|
|
00:38:54.120 --> 00:38:55.320 |
|
So Sigma is in. |
|
|
|
00:38:55.320 --> 00:38:57.750 |
|
It's from this equation here where I |
|
|
|
00:38:57.750 --> 00:39:00.720 |
|
say that the similarity of two examples |
|
|
|
00:39:00.720 --> 00:39:04.350 |
|
is their distance, their L2 distance |
|
|
|
00:39:04.350 --> 00:39:06.140 |
|
squared divided by two Sigma |
|
|
|
00:39:06.980 --> 00:39:07.473 |
|
squared. |
|
|
|
00:39:07.473 --> 00:39:09.930 |
|
So if Sigma is really high, then it |
|
|
|
00:39:09.930 --> 00:39:11.500 |
|
means that my similarity falls off |
|
|
|
00:39:11.500 --> 00:39:14.490 |
|
slowly as two examples get further away |
|
|
|
00:39:14.490 --> 00:39:15.620 |
|
in feature space. |
|
|
|
00:39:15.620 --> 00:39:18.050 |
|
And if it's really small then the |
|
|
|
00:39:18.050 --> 00:39:20.210 |
|
similarity drops off really quickly. |
|
|
|
00:39:20.210 --> 00:39:22.120 |
|
So if it's like close to 0. |
|
|
|
00:39:22.970 --> 00:39:25.380 |
|
Then the closest example will just be |
|
|
|
00:39:25.380 --> 00:39:27.390 |
|
way, way way closer than any of the |
|
|
|
00:39:27.390 --> 00:39:28.190 |
|
other examples. |
|
|
|
00:39:29.690 --> 00:39:31.070 |
|
According to the similarity measure. |
|
|
|
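NOTE
A tiny numeric illustration of that falloff, assuming numpy and made-up points:

```python
import numpy as np

def gaussian_kernel(x, z, sigma):
    return np.exp(-np.sum((x - z) ** 2) / (2.0 * sigma ** 2))

x = np.zeros(2)
near, far = np.array([0.1, 0.0]), np.array([1.0, 0.0])
for sigma in [1.0, 0.1]:
    ratio = gaussian_kernel(x, near, sigma) / gaussian_kernel(x, far, sigma)
    print(sigma, ratio)
# sigma = 1.0 -> ratio ~ 1.6: near and far points have comparable similarity.
# sigma = 0.1 -> ratio ~ 3e21: only the nearest example matters, which is
# why a tiny sigma behaves like a nearest neighbor classifier.
```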
00:39:32.440 --> 00:39:32.980 |
|
Yeah. |
|
|
|
00:39:33.240 --> 00:39:35.970 |
|
The previous example we are discussing |
|
|
|
00:39:35.970 --> 00:39:37.700 |
|
projecting features to higher |
|
|
|
00:39:37.700 --> 00:39:38.580 |
|
dimensions, right? |
|
|
|
00:39:38.580 --> 00:39:41.730 |
|
Yeah, so how can we be sure this is the |
|
|
|
00:39:41.730 --> 00:39:43.650 |
|
minimum dimension required to |
|
|
|
00:39:43.650 --> 00:39:44.380 |
|
classify that? |
|
|
|
00:39:45.130 --> 00:39:46.810 |
|
for the particular features or example space |
|
|
|
00:39:46.810 --> 00:39:47.240 |
|
we have. |
|
|
|
00:39:49.810 --> 00:39:50.980 |
|
Sorry, can you ask it again? |
|
|
|
00:39:50.980 --> 00:39:52.100 |
|
I'm not sure if I got it. |
|
|
|
00:39:52.590 --> 00:39:55.120 |
|
I want to understand something. So we know that we |
|
|
|
00:39:55.120 --> 00:39:56.250 |
|
need to project it in different |
|
|
|
00:39:56.250 --> 00:39:58.610 |
|
dimensions to classify that properly. |
|
|
|
00:39:58.610 --> 00:40:01.210 |
|
In the previous example like so we said |
|
|
|
00:40:01.210 --> 00:40:02.100 |
|
we discussed right? |
|
|
|
00:40:02.100 --> 00:40:04.486 |
|
So how can we be sure what is the |
|
|
|
00:40:04.486 --> 00:40:05.850 |
|
minimum dimension? |
|
|
|
00:40:05.850 --> 00:40:08.659 |
|
So the question is how do you know what |
|
|
|
00:40:08.660 --> 00:40:10.700 |
|
kernel you should use or how high you |
|
|
|
00:40:10.700 --> 00:40:12.400 |
|
should project the data right? |
|
|
|
00:40:12.980 --> 00:40:15.750 |
|
Yeah, that's a problem: you |
|
|
|
00:40:15.750 --> 00:40:17.523 |
|
don't really know, so you have to try. |
|
|
|
00:40:17.523 --> 00:40:19.350 |
|
You can try different things and then |
|
|
|
00:40:19.350 --> 00:40:21.200 |
|
you use your validation set to choose |
|
|
|
00:40:21.200 --> 00:40:21.950 |
|
the best model. |
|
|
|
00:40:22.930 --> 00:40:26.350 |
|
But that's a downside of SVMs: |
|
|
|
00:40:26.350 --> 00:40:29.960 |
|
the optimization for big data sets |
|
|
|
00:40:29.960 --> 00:40:32.700 |
|
can be pretty slow if you're using a |
|
|
|
00:40:32.700 --> 00:40:33.120 |
|
kernel. |
|
|
|
00:40:33.790 --> 00:40:36.000 |
|
And so it can be very time consuming to |
|
|
|
00:40:36.000 --> 00:40:37.410 |
|
try to search through all the different |
|
|
|
00:40:37.410 --> 00:40:38.620 |
|
parameters and different types of |
|
|
|
00:40:38.620 --> 00:40:39.700 |
|
kernels that you could use. |
|
|
|
00:40:41.420 --> 00:40:44.310 |
|
There's another trick which you could |
|
|
|
00:40:44.310 --> 00:40:46.230 |
|
do, which is like you take a random |
|
|
|
00:40:46.230 --> 00:40:47.150 |
|
forest. |
|
|
|
00:40:48.650 --> 00:40:51.300 |
|
And you take the leaf node that each |
|
|
|
00:40:51.300 --> 00:40:53.632 |
|
data point falls into as a binary |
|
|
|
00:40:53.632 --> 00:40:55.690 |
|
variable, so it'll be a sparse binary |
|
|
|
00:40:55.690 --> 00:40:56.140 |
|
variable. |
|
|
|
00:40:56.920 --> 00:40:58.230 |
|
And then you can apply your linear |
|
|
|
00:40:58.230 --> 00:40:59.690 |
|
classifier to it. |
|
|
|
00:40:59.690 --> 00:41:01.480 |
|
So then you're like mapping it into |
|
|
|
00:41:01.480 --> 00:41:03.650 |
|
this high dimensional space that kind |
|
|
|
00:41:03.650 --> 00:41:05.540 |
|
of takes into account the feature |
|
|
|
00:41:05.540 --> 00:41:08.800 |
|
structure and where the data should be |
|
|
|
00:41:08.800 --> 00:41:10.190 |
|
like pretty linearly separable. |
|
|
|
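NOTE
A sketch of that random-forest-leaf trick, assuming scikit-learn (the dataset and settings are placeholders):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
leaves = rf.apply(X_tr)                      # leaf index per example, per tree
enc = OneHotEncoder(handle_unknown="ignore").fit(leaves)
Z_tr = enc.transform(leaves)                 # sparse binary leaf features
linear = LogisticRegression(max_iter=1000).fit(Z_tr, y_tr)
print(linear.score(enc.transform(rf.apply(X_te)), y_te))
```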
00:41:16.350 --> 00:41:19.396 |
|
So in summary, for |
|
|
|
00:41:19.396 --> 00:41:21.560 |
|
kernels you can learn the classifiers |
|
|
|
00:41:21.560 --> 00:41:23.120 |
|
in high dimensional feature spaces |
|
|
|
00:41:23.120 --> 00:41:24.705 |
|
without actually having to map them |
|
|
|
00:41:24.705 --> 00:41:25.090 |
|
there. |
|
|
|
00:41:25.090 --> 00:41:26.380 |
|
Like we did for the polynomial. |
|
|
|
00:41:26.380 --> 00:41:28.898 |
|
The data can be linearly separable in |
|
|
|
00:41:28.898 --> 00:41:30.229 |
|
the high dimensional space. |
|
|
|
00:41:30.230 --> 00:41:31.796 |
|
Even if it |
|
|
|
00:41:31.796 --> 00:41:34.029 |
|
wasn't actually |
|
|
|
00:41:34.029 --> 00:41:36.150 |
|
separable in the original feature |
|
|
|
00:41:36.150 --> 00:41:36.520 |
|
space. |
|
|
|
00:41:37.530 --> 00:41:40.830 |
|
And you can use the kernel for an SVM, |
|
|
|
00:41:40.830 --> 00:41:42.760 |
|
but the concept of kernels it's also |
|
|
|
00:41:42.760 --> 00:41:44.620 |
|
used in other learning algorithms, so |
|
|
|
00:41:44.620 --> 00:41:46.200 |
|
it's just like a general concept worth |
|
|
|
00:41:46.200 --> 00:41:46.710 |
|
knowing. |
|
|
|
00:41:48.530 --> 00:41:51.890 |
|
All right, so it's time for a stretch |
|
|
|
00:41:51.890 --> 00:41:52.750 |
|
break. |
|
|
|
00:41:53.910 --> 00:41:56.160 |
|
And you can think about this question |
|
|
|
00:41:56.160 --> 00:41:58.130 |
|
if you were to remove a support vector |
|
|
|
00:41:58.130 --> 00:41:59.600 |
|
from the training set, would the decision |
|
|
|
00:41:59.600 --> 00:42:00.560 |
|
boundary change? |
|
|
|
00:42:01.200 --> 00:42:03.799 |
|
And then after 2 minutes I'll give the |
|
|
|
00:42:03.800 --> 00:42:06.150 |
|
answer to that and then I'll give an |
|
|
|
00:42:06.150 --> 00:42:08.360 |
|
application example and talk about the |
|
|
|
00:42:08.360 --> 00:42:09.380 |
|
Pegasos algorithm. |
|
|
|
00:44:27.710 --> 00:44:30.520 |
|
So what's the answer to this? |
|
|
|
00:44:30.520 --> 00:44:32.510 |
|
If I were to remove one of these |
|
|
|
00:44:32.510 --> 00:44:35.240 |
|
examples, here is my decision boundary. |
|
|
|
00:44:35.240 --> 00:44:36.540 |
|
Is it going to change or not? |
|
|
|
00:44:38.300 --> 00:44:40.580 |
|
Yeah, it will change right? |
|
|
|
00:44:40.580 --> 00:44:42.120 |
|
If I moved any of the other ones, it |
|
|
|
00:44:42.120 --> 00:44:42.760 |
|
wouldn't change. |
|
|
|
00:44:42.760 --> 00:44:43.979 |
|
But if I remove one of the support |
|
|
|
00:44:43.980 --> 00:44:45.655 |
|
vectors it's going to change because my |
|
|
|
00:44:45.655 --> 00:44:46.315 |
|
support is changing. |
|
|
|
00:44:46.315 --> 00:44:49.144 |
|
So if I remove this for example, then I |
|
|
|
00:44:49.144 --> 00:44:51.328 |
|
think the line would like tilt this way |
|
|
|
00:44:51.328 --> 00:44:53.944 |
|
so that it would depend on that X and |
|
|
|
00:44:53.944 --> 00:44:54.651 |
|
this X. |
|
|
|
00:44:54.651 --> 00:44:58.186 |
|
And if I remove this O then I think it |
|
|
|
00:44:58.186 --> 00:45:00.240 |
|
would shift down this way so that it |
|
|
|
00:45:00.240 --> 00:45:02.020 |
|
depends on this O and these X's. |
|
|
|
00:45:02.660 --> 00:45:04.970 |
|
It'd find some boundary where three of |
|
|
|
00:45:04.970 --> 00:45:06.920 |
|
those points are equidistant, 2 on one |
|
|
|
00:45:06.920 --> 00:45:07.840 |
|
side and 1 on the other. |
|
|
|
00:45:12.630 --> 00:45:14.120 |
|
Alright, so I'm going to give you an |
|
|
|
00:45:14.120 --> 00:45:15.920 |
|
example of how it's used, and you may |
|
|
|
00:45:15.920 --> 00:45:17.862 |
|
notice that almost all the examples are |
|
|
|
00:45:17.862 --> 00:45:19.570 |
|
computer vision, and that's because I |
|
|
|
00:45:19.570 --> 00:45:21.431 |
|
know a lot of computer vision and so |
|
|
|
00:45:21.431 --> 00:45:22.700 |
|
that's always what occurs to me. |
|
|
|
00:45:24.630 --> 00:45:29.090 |
|
But this is an object detection case, |
|
|
|
00:45:29.090 --> 00:45:29.760 |
|
so. |
|
|
|
00:45:30.620 --> 00:45:33.770 |
|
The method here it's like called |
|
|
|
00:45:33.770 --> 00:45:35.790 |
|
sliding window object detection, which |
|
|
|
00:45:35.790 --> 00:45:37.370 |
|
you can visualize as like you have |
|
|
|
00:45:37.370 --> 00:45:38.853 |
|
some image and you take a little window |
|
|
|
00:45:38.853 --> 00:45:41.230 |
|
and you slide it across the image and |
|
|
|
00:45:41.230 --> 00:45:43.250 |
|
you extract a patch at each position. |
|
|
|
00:45:44.180 --> 00:45:45.990 |
|
And then you rescale the image and do |
|
|
|
00:45:45.990 --> 00:45:46.550 |
|
it again. |
|
|
|
00:45:46.550 --> 00:45:48.467 |
|
So you end up with like a whole. |
|
|
|
00:45:48.467 --> 00:45:50.290 |
|
You turn the image into a whole bunch |
|
|
|
00:45:50.290 --> 00:45:53.290 |
|
of different patches of the same size. |
|
|
|
00:45:54.400 --> 00:45:56.830 |
|
After rescaling them, but they |
|
|
|
00:45:56.830 --> 00:45:59.690 |
|
correspond to different |
|
|
|
00:45:59.690 --> 00:46:01.650 |
|
overlapping patches at different |
|
|
|
00:46:01.650 --> 00:46:03.170 |
|
positions and scales in the original |
|
|
|
00:46:03.170 --> 00:46:03.550 |
|
image. |
|
|
|
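NOTE
A minimal sketch of the sliding-window enumeration just described, assuming numpy-style image arrays (the window size and stride are arbitrary):

```python
def sliding_windows(image, win_h=128, win_w=64, stride=8):
    """Yield (top, left, patch) for every window position at one scale."""
    H, W = image.shape[:2]
    for top in range(0, H - win_h + 1, stride):
        for left in range(0, W - win_w + 1, stride):
            yield top, left, image[top:top + win_h, left:left + win_w]

# To cover scales, rescale the image (say by repeated factors of ~1.2) and
# run the same loop again; every patch then gets scored by the classifier.
```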
00:46:04.270 --> 00:46:06.360 |
|
And then for each of those patches you |
|
|
|
00:46:06.360 --> 00:46:08.840 |
|
have to classify it as being the object |
|
|
|
00:46:08.840 --> 00:46:10.470 |
|
of interest or not, in this case of |
|
|
|
00:46:10.470 --> 00:46:11.120 |
|
pedestrian. |
|
|
|
00:46:12.070 --> 00:46:14.830 |
|
Where pedestrian just means person. |
|
|
|
00:46:14.830 --> 00:46:16.970 |
|
These aren't actually necessarily |
|
|
|
00:46:16.970 --> 00:46:18.480 |
|
pedestrians like this guy's not on the |
|
|
|
00:46:18.480 --> 00:46:19.000 |
|
road, but. |
|
|
|
00:46:19.960 --> 00:46:20.846 |
|
This person. |
|
|
|
00:46:20.846 --> 00:46:24.290 |
|
So these are all examples of patches |
|
|
|
00:46:24.290 --> 00:46:26.126 |
|
that you would want to classify as a |
|
|
|
00:46:26.126 --> 00:46:26.464 |
|
person. |
|
|
|
00:46:26.464 --> 00:46:28.490 |
|
So you can see it's kind of difficult |
|
|
|
00:46:28.490 --> 00:46:30.190 |
|
because there could be lots of |
|
|
|
00:46:30.190 --> 00:46:31.880 |
|
different backgrounds or other people |
|
|
|
00:46:31.880 --> 00:46:34.030 |
|
in the way and you have to distinguish |
|
|
|
00:46:34.030 --> 00:46:36.580 |
|
it from like a fire hydrant that's like |
|
|
|
00:46:36.580 --> 00:46:37.953 |
|
pretty far away and looks kind of |
|
|
|
00:46:37.953 --> 00:46:39.420 |
|
person like or a lamp post. |
|
|
|
00:46:42.390 --> 00:46:45.400 |
|
This method is to like extract |
|
|
|
00:46:45.400 --> 00:46:46.330 |
|
features. |
|
|
|
00:46:46.330 --> 00:46:48.060 |
|
Basically you normalize the colors, |
|
|
|
00:46:48.060 --> 00:46:49.730 |
|
compute gradients, compute the gradient |
|
|
|
00:46:49.730 --> 00:46:50.340 |
|
orientation. |
|
|
|
00:46:50.340 --> 00:46:51.550 |
|
I'll show you an illustration in the |
|
|
|
00:46:51.550 --> 00:46:53.760 |
|
next slide and then you apply a linear |
|
|
|
00:46:53.760 --> 00:46:54.290 |
|
SVM. |
|
|
|
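NOTE
A sketch of that feature-then-linear-SVM pipeline, assuming scikit-image and scikit-learn; train_patches, train_labels, and test_patches are placeholders for equal-sized grayscale patches with 0/1 person labels:

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def featurize(patches):
    """HOG: gradient-orientation histograms over cells, block-normalized."""
    return np.array([
        hog(p, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        for p in patches
    ])

# train_patches / train_labels / test_patches are hypothetical inputs.
clf = LinearSVC(C=0.01).fit(featurize(train_patches), train_labels)
scores = clf.decision_function(featurize(test_patches))  # person-ness score
```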
00:46:55.040 --> 00:46:56.450 |
|
And so for each of these windows you |
|
|
|
00:46:56.450 --> 00:46:57.902 |
|
want to say it's a person or not a |
|
|
|
00:46:57.902 --> 00:46:58.098 |
|
person. |
|
|
|
00:46:58.098 --> 00:46:59.840 |
|
So you train on some training set of |
|
|
|
00:46:59.840 --> 00:47:01.400 |
|
images where you have some people that |
|
|
|
00:47:01.400 --> 00:47:02.100 |
|
are annotated. |
|
|
|
00:47:02.770 --> 00:47:04.650 |
|
And then you test on some held out set. |
|
|
|
00:47:06.300 --> 00:47:09.515 |
|
So this is the feature representation. |
|
|
|
00:47:09.515 --> 00:47:11.920 |
|
It's basically like where are the edges |
|
|
|
00:47:11.920 --> 00:47:14.170 |
|
and the image and the patch and how |
|
|
|
00:47:14.170 --> 00:47:15.470 |
|
strong are they and what are their |
|
|
|
00:47:15.470 --> 00:47:16.185 |
|
orientations. |
|
|
|
00:47:16.185 --> 00:47:18.460 |
|
It's called HOG, or histogram of |
|
|
|
00:47:18.460 --> 00:47:20.460 |
|
oriented gradients, representation. |
|
|
|
00:47:21.200 --> 00:47:23.930 |
|
And this paper is cited over 40,000 |
|
|
|
00:47:23.930 --> 00:47:24.610 |
|
times. |
|
|
|
00:47:24.610 --> 00:47:26.670 |
|
It's mostly for the hog features, but |
|
|
|
00:47:26.670 --> 00:47:28.790 |
|
it was also the most effective person |
|
|
|
00:47:28.790 --> 00:47:29.840 |
|
detector for a while. |
|
|
|
00:47:34.610 --> 00:47:38.876 |
|
So it's very effective. |
|
|
|
00:47:38.876 --> 00:47:42.730 |
|
So in these plots, the X axis is the |
|
|
|
00:47:42.730 --> 00:47:44.432 |
|
number of false positives per window. |
|
|
|
00:47:44.432 --> 00:47:47.180 |
|
So it's a chance that you misclassify |
|
|
|
00:47:47.180 --> 00:47:49.040 |
|
one of these windows as a person when |
|
|
|
00:47:49.040 --> 00:47:50.117 |
|
it's not really a person. |
|
|
|
00:47:50.117 --> 00:47:52.460 |
|
It's like a fire hydrant or random |
|
|
|
00:47:52.460 --> 00:47:53.520 |
|
leaves or something else. |
|
|
|
00:47:54.660 --> 00:47:58.600 |
|
And the Y axis is the miss rate, which |
|
|
|
00:47:58.600 --> 00:48:01.480 |
|
is the number of true people that you |
|
|
|
00:48:01.480 --> 00:48:02.440 |
|
fail to detect. |
|
|
|
00:48:03.080 --> 00:48:05.160 |
|
So the fact that it's way down here |
|
|
|
00:48:05.160 --> 00:48:07.560 |
|
basically means that it never makes any |
|
|
|
00:48:07.560 --> 00:48:09.630 |
|
mistakes on this data set, so it can |
|
|
|
00:48:09.630 --> 00:48:13.110 |
|
classify. It finds 99.8%, |
|
|
|
00:48:13.110 --> 00:48:16.730 |
|
99.8% of the people, and almost never |
|
|
|
00:48:16.730 --> 00:48:17.860 |
|
has false positives. |
|
|
|
00:48:18.900 --> 00:48:20.400 |
|
That was on this MIT database. |
|
|
|
00:48:21.040 --> 00:48:23.154 |
|
Then there's another data set which was |
|
|
|
00:48:23.154 --> 00:48:25.140 |
|
like more, which was harder. |
|
|
|
00:48:25.140 --> 00:48:27.490 |
|
Those were the examples I showed of street |
|
|
|
00:48:27.490 --> 00:48:29.170 |
|
scenes and more crowded scenes. |
|
|
|
00:48:29.860 --> 00:48:32.870 |
|
And they're the previous approaches had |
|
|
|
00:48:32.870 --> 00:48:35.230 |
|
like pretty high false positive rates. |
|
|
|
00:48:35.230 --> 00:48:38.340 |
|
So as a rule of thumb I would say |
|
|
|
00:48:38.340 --> 00:48:43.090 |
|
there's typically about 10,000 windows |
|
|
|
00:48:43.090 --> 00:48:43.780 |
|
per image. |
|
|
|
00:48:44.480 --> 00:48:46.427 |
|
So if you have like a false positive |
|
|
|
00:48:46.427 --> 00:48:48.755 |
|
rate of 10 to the -4, that means that |
|
|
|
00:48:48.755 --> 00:48:50.555 |
|
you make one mistake on every single |
|
|
|
00:48:50.555 --> 00:48:50.920 |
|
image. |
|
|
|
00:48:50.920 --> 00:48:51.650 |
|
On average. |
|
|
|
00:48:51.650 --> 00:48:53.400 |
|
You like think that there's one person |
|
|
|
00:48:53.400 --> 00:48:55.080 |
|
where there isn't anybody on average |
|
|
|
00:48:55.080 --> 00:48:55.985 |
|
once per image. |
|
|
|
00:48:55.985 --> 00:48:57.410 |
|
So that's kind of a that's an |
|
|
|
00:48:57.410 --> 00:48:58.490 |
|
unacceptable rate. |
|
|
|
00:48:59.950 --> 00:49:02.723 |
|
But this method is able to get like 10 |
|
|
|
00:49:02.723 --> 00:49:06.380 |
|
to the -6, which is a pretty good rate |
|
|
|
00:49:06.380 --> 00:49:09.230 |
|
and still find like 70% of the people. |
|
|
|
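NOTE
Working through that rule of thumb:

```python
windows_per_image = 10_000
for fppw in [1e-4, 1e-6]:
    print(fppw, "->", windows_per_image * fppw, "false positives per image")
# 1e-4 -> 1.0 per image (one phantom person in every image, unacceptable);
# 1e-6 -> 0.01 per image, i.e. roughly one mistake per 100 images.
```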
00:49:10.030 --> 00:49:11.400 |
|
So these like. |
|
|
|
00:49:12.320 --> 00:49:14.665 |
|
These curves that are clustered here |
|
|
|
00:49:14.665 --> 00:49:17.020 |
|
are all different SVMs. |
|
|
|
00:49:17.020 --> 00:49:20.970 |
|
Linear SVMs. They also do... |
|
|
|
00:49:21.040 --> 00:49:21.800 |
|
|
|
|
|
00:49:22.760 --> 00:49:23.060 |
|
Wait. |
|
|
|
00:49:23.060 --> 00:49:23.930 |
|
Linear. |
|
|
|
00:49:23.930 --> 00:49:25.860 |
|
Yeah, so the black one here is a |
|
|
|
00:49:25.860 --> 00:49:28.110 |
|
kernelized SVM, which performs very |
|
|
|
00:49:28.110 --> 00:49:30.130 |
|
similarly, but takes a lot longer to |
|
|
|
00:49:30.130 --> 00:49:32.340 |
|
train and do inference, so it wouldn't |
|
|
|
00:49:32.340 --> 00:49:32.890 |
|
be preferred. |
|
|
|
00:49:33.880 --> 00:49:35.790 |
|
And then the other previous approaches |
|
|
|
00:49:35.790 --> 00:49:36.870 |
|
are doing worse. |
|
|
|
00:49:36.870 --> 00:49:38.500 |
|
They have like higher false positive |
|
|
|
00:49:38.500 --> 00:49:39.960 |
|
rates for the same detection rate. |
|
|
|
00:49:42.860 --> 00:49:44.470 |
|
So that was just that was just one |
|
|
|
00:49:44.470 --> 00:49:46.832 |
|
example, but as I said, SVMs were |
|
|
|
00:49:46.832 --> 00:49:49.080 |
|
the dominant, I think the most commonly |
|
|
|
00:49:49.080 --> 00:49:50.903 |
|
used, I wouldn't say dominant, but most |
|
|
|
00:49:50.903 --> 00:49:53.510 |
|
commonly used classifier for several |
|
|
|
00:49:53.510 --> 00:49:53.930 |
|
years. |
|
|
|
00:49:56.330 --> 00:49:58.440 |
|
So SVMs are good, broadly applicable |
|
|
|
00:49:58.440 --> 00:49:58.782 |
|
classifiers. |
|
|
|
00:49:58.782 --> 00:50:00.780 |
|
They have a strong foundation in |
|
|
|
00:50:00.780 --> 00:50:01.970 |
|
statistical learning theory. |
|
|
|
00:50:01.970 --> 00:50:04.000 |
|
They work even if you have a lot of |
|
|
|
00:50:04.000 --> 00:50:05.480 |
|
weak features. |
|
|
|
00:50:05.480 --> 00:50:08.400 |
|
You do have to tune the parameters like |
|
|
|
00:50:08.400 --> 00:50:10.470 |
|
C and that can be time consuming. |
|
|
|
00:50:11.160 --> 00:50:13.390 |
|
And if you're using nonlinear SVM, then |
|
|
|
00:50:13.390 --> 00:50:14.817 |
|
you have to decide what kernel function |
|
|
|
00:50:14.817 --> 00:50:16.560 |
|
you're going to use, which may involve |
|
|
|
00:50:16.560 --> 00:50:19.010 |
|
even more tuning, and it means |
|
|
|
00:50:19.010 --> 00:50:20.150 |
|
that it's going to be a slow |
|
|
|
00:50:20.150 --> 00:50:21.940 |
|
optimization and slower inference. |
|
|
|
00:50:22.860 --> 00:50:24.680 |
|
The main negatives of SVM, the |
|
|
|
00:50:24.680 --> 00:50:25.550 |
|
downsides. |
|
|
|
00:50:25.550 --> 00:50:27.160 |
|
It doesn't have feature learning as |
|
|
|
00:50:27.160 --> 00:50:29.580 |
|
part of the framework, where trees for |
|
|
|
00:50:29.580 --> 00:50:30.750 |
|
example, you're kind of learning |
|
|
|
00:50:30.750 --> 00:50:32.620 |
|
features and for neural Nets you are as |
|
|
|
00:50:32.620 --> 00:50:32.930 |
|
well. |
|
|
|
00:50:33.770 --> 00:50:38.430 |
|
And it also could be very slow |
|
|
|
00:50:38.430 --> 00:50:39.010 |
|
to train. |
|
|
|
00:50:40.290 --> 00:50:42.930 |
|
Until Pegasos, which is the next thing |
|
|
|
00:50:42.930 --> 00:50:44.510 |
|
that I'm talking about. So, this was |
|
|
|
00:50:44.510 --> 00:50:46.660 |
|
like a much faster and simpler way to |
|
|
|
00:50:46.660 --> 00:50:47.790 |
|
train these algorithms. |
|
|
|
00:50:49.220 --> 00:50:50.755 |
|
So I'm not going to talk about the bad |
|
|
|
00:50:50.755 --> 00:50:53.270 |
|
ways, or the slow ways, to optimize |
|
|
|
00:50:53.270 --> 00:50:53.380 |
|
it. |
|
|
|
00:50:54.360 --> 00:50:56.750 |
|
So the next thing I'm going |
|
|
|
00:50:56.750 --> 00:50:57.710 |
|
to talk about |
|
|
|
00:50:57.980 --> 00:51:01.350 |
|
is called Pegasos, which is how you can |
|
|
|
00:51:01.350 --> 00:51:04.100 |
|
optimize the SVM and it stands for |
|
|
|
00:51:04.100 --> 00:51:06.510 |
|
primal estimated subgradient solver for |
|
|
|
00:51:06.510 --> 00:51:07.360 |
|
SVM. So: |
|
|
|
00:51:09.020 --> 00:51:11.095 |
|
Primal because you're solving it in the |
|
|
|
00:51:11.095 --> 00:51:12.660 |
|
primal formulation where you're |
|
|
|
00:51:12.660 --> 00:51:14.540 |
|
minimizing the weights and the margin. |
|
|
|
00:51:15.460 --> 00:51:16.840 |
|
Estimated because that's where you're. |
|
|
|
00:51:17.900 --> 00:51:20.090 |
|
The subgradient is because you're going |
|
|
|
00:51:20.090 --> 00:51:21.860 |
|
to you're going to make decisions based |
|
|
|
00:51:21.860 --> 00:51:24.970 |
|
on a subsample of the training data. |
|
|
|
00:51:24.970 --> 00:51:27.030 |
|
So you're trying to take a step in the |
|
|
|
00:51:27.030 --> 00:51:29.000 |
|
right direction based on a few training |
|
|
|
00:51:29.000 --> 00:51:31.710 |
|
examples. And then "solver for SVM." |
|
|
|
00:51:33.540 --> 00:51:36.790 |
|
I found out yesterday when I was |
|
|
|
00:51:36.790 --> 00:51:39.460 |
|
searching for the paper that Pegasos is |
|
|
|
00:51:39.460 --> 00:51:42.260 |
|
also like an assisted suicide system in |
|
|
|
00:51:42.260 --> 00:51:42.880 |
|
Switzerland. |
|
|
|
00:51:42.880 --> 00:51:45.420 |
|
So it's kind of an unfortunate name, |
|
|
|
00:51:45.420 --> 00:51:46.820 |
|
unfortunate acronym. |
|
|
|
00:51:48.920 --> 00:51:49.520 |
|
And. |
|
|
|
00:51:50.550 --> 00:51:54.150 |
|
So this is the SVM problem that |
|
|
|
00:51:54.150 --> 00:51:56.160 |
|
we want to solve, minimize the weights |
|
|
|
00:51:56.160 --> 00:52:00.000 |
|
while also minimizing the hinge |
|
|
|
00:52:00.000 --> 00:52:01.260 |
|
loss on all the samples. |
|
|
|
00:52:02.510 --> 00:52:04.200 |
|
But we can reframe this. |
|
|
|
00:52:04.200 --> 00:52:06.780 |
|
We can reframe it in terms of one |
|
|
|
00:52:06.780 --> 00:52:07.110 |
|
example. |
|
|
|
00:52:07.110 --> 00:52:09.297 |
|
So we could say, well, let's say we |
|
|
|
00:52:09.297 --> 00:52:10.870 |
|
want to minimize the weights and we |
|
|
|
00:52:10.870 --> 00:52:12.706 |
|
want to minimize the loss for one |
|
|
|
00:52:12.706 --> 00:52:13.079 |
|
example. |
|
|
|
00:52:14.410 --> 00:52:17.200 |
|
Then we can ask like how would I change |
|
|
|
00:52:17.200 --> 00:52:19.630 |
|
the weights if that were my objective? |
|
|
|
00:52:19.630 --> 00:52:21.897 |
|
And if you want to know how you can |
|
|
|
00:52:21.897 --> 00:52:23.913 |
|
improve something, improve some |
|
|
|
00:52:23.913 --> 00:52:25.230 |
|
objective with respect to some |
|
|
|
00:52:25.230 --> 00:52:25.780 |
|
variable. |
|
|
|
00:52:26.670 --> 00:52:27.890 |
|
Then what you do is you take the |
|
|
|
00:52:27.890 --> 00:52:30.260 |
|
partial derivative of the objective |
|
|
|
00:52:30.260 --> 00:52:33.330 |
|
with respect to the variable, and if |
|
|
|
00:52:33.330 --> 00:52:35.285 |
|
you want the objective to go down, this |
|
|
|
00:52:35.285 --> 00:52:36.440 |
|
is like a loss function. |
|
|
|
00:52:36.440 --> 00:52:38.090 |
|
So we wanted to go down. |
|
|
|
00:52:38.090 --> 00:52:40.763 |
|
So I want to find the derivative with |
|
|
|
00:52:40.763 --> 00:52:42.670 |
|
respect to my variable, in this case |
|
|
|
00:52:42.670 --> 00:52:45.680 |
|
the weights, and I want to take a small |
|
|
|
00:52:45.680 --> 00:52:47.450 |
|
step in the negative direction of that |
|
|
|
00:52:47.450 --> 00:52:49.262 |
|
gradient of that derivative. |
|
|
|
00:52:49.262 --> 00:52:51.750 |
|
So that will make my objective just a |
|
|
|
00:52:51.750 --> 00:52:52.400 |
|
little bit better. |
|
|
|
00:52:52.400 --> 00:52:53.990 |
|
It'll make my loss a little bit lower. |
|
|
|
00:52:56.470 --> 00:52:58.690 |
|
And if I compute the gradient of this |
|
|
|
00:52:58.690 --> 00:53:01.440 |
|
objective with respect to W. |
|
|
|
00:53:02.110 --> 00:53:06.610 |
|
So the gradient of W squared is just |
|
|
|
00:53:06.610 --> 00:53:10.502 |
|
2W. And also the |
|
|
|
00:53:10.502 --> 00:53:11.210 |
|
gradient of. |
|
|
|
00:53:11.210 --> 00:53:13.360 |
|
Again, vector math: |
|
|
|
00:53:13.360 --> 00:53:15.750 |
|
You might not be familiar with doing |
|
|
|
00:53:15.750 --> 00:53:17.440 |
|
like gradients of vectors and stuff, |
|
|
|
00:53:17.440 --> 00:53:19.350 |
|
but it often works out kind of |
|
|
|
00:53:19.350 --> 00:53:20.800 |
|
analogous to the scalars. |
|
|
|
00:53:20.800 --> 00:53:23.385 |
|
So the gradient of W transpose W is |
|
|
|
00:53:23.385 --> 00:53:24.130 |
|
also 2W. |
|
|
|
00:53:26.290 --> 00:53:29.515 |
|
This loss function is this margin which |
|
|
|
00:53:29.515 --> 00:53:30.850 |
|
is just Y times W transpose X. |
|
|
|
00:53:30.850 --> 00:53:32.650 |
|
This is like a dot product W transpose |
|
|
|
00:53:32.650 --> 00:53:33.000 |
|
X. |
|
|
|
00:53:34.380 --> 00:53:36.690 |
|
So the gradient of this with respect to |
|
|
|
00:53:36.690 --> 00:53:39.770 |
|
W is |
|
|
|
00:53:39.830 --> 00:53:41.590 |
|
Negative YX, right? |
|
|
|
00:53:42.320 --> 00:53:45.880 |
|
And so my gradient if I've got this Max |
|
|
|
00:53:45.880 --> 00:53:46.570 |
|
here as well. |
|
|
|
00:53:46.570 --> 00:53:49.260 |
|
So that means that if I'm already like |
|
|
|
00:53:49.260 --> 00:53:50.890 |
|
confidently correct, then I have no |
|
|
|
00:53:50.890 --> 00:53:52.780 |
|
loss so my gradient is 0. |
|
|
|
00:53:53.620 --> 00:53:55.800 |
|
If I'm not confidently correct, if I'm |
|
|
|
00:53:55.800 --> 00:53:58.380 |
|
within the margin of 1 then I have this |
|
|
|
00:53:58.380 --> 00:54:01.630 |
|
loss and the size of this. |
|
|
|
00:54:03.400 --> 00:54:06.490 |
|
The size of the gradient |
|
|
|
00:54:07.180 --> 00:54:11.210 |
|
is just 1; it has a magnitude of 1, and |
|
|
|
00:54:11.210 --> 00:54:13.750 |
|
the direction because my hinge loss has |
|
|
|
00:54:13.750 --> 00:54:14.320 |
|
this. |
|
|
|
00:54:15.400 --> 00:54:17.315 |
|
So the size due to the hinge loss is just |
|
|
|
00:54:17.315 --> 00:54:18.900 |
|
one because the hinge loss just has a |
|
|
|
00:54:18.900 --> 00:54:20.250 |
|
gradient of 1, it's just a straight |
|
|
|
00:54:20.250 --> 00:54:20.550 |
|
line. |
|
|
|
00:54:21.620 --> 00:54:24.950 |
|
And then the gradient of this is YX, right? |
|
|
|
00:54:24.950 --> 00:54:28.825 |
|
The gradient of YW transpose X is YX |
|
|
|
00:54:28.825 --> 00:54:31.797 |
|
and so I get this gradient here, which |
|
|
|
00:54:31.797 --> 00:54:35.696 |
|
is a 0 if my margin is good enough, |
|
|
|
00:54:35.696 --> 00:54:36.963 |
|
and otherwise a 1: |
|
|
|
00:54:36.963 --> 00:54:40.300 |
|
this term is a 1 if I'm under the |
|
|
|
00:54:40.300 --> 00:54:40.630 |
|
margin. |
|
|
|
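NOTE
The per-example subgradient being built up here (including the Lambda*W regularizer term he adds just below), as a small numpy sketch; lam is the regularization weight:

```python
import numpy as np

def hinge_subgradient(w, x, y, lam):
    """Subgradient of (lam/2)*||w||^2 + max(0, 1 - y*w.x) with respect to w."""
    if y * np.dot(w, x) < 1:   # within the margin: hinge term is active
        return lam * w - y * x
    return lam * w             # margin satisfied: only the regularizer remains
```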
00:54:41.520 --> 00:54:44.500 |
|
Times Y, which is 1 or -1, depending |
|
|
|
00:54:44.500 --> 00:54:46.419 |
|
on the label, times X which is the |
|
|
|
00:54:46.420 --> 00:54:47.060 |
|
feature vector. |
|
|
|
00:54:47.900 --> 00:54:48.830 |
|
So in other words. |
|
|
|
00:54:49.930 --> 00:54:52.720 |
|
If I'm not happy with my score right |
|
|
|
00:54:52.720 --> 00:54:56.070 |
|
now, and let's say W transpose |
|
|
|
00:54:56.070 --> 00:54:58.690 |
|
X is 0.5 and y = 1. |
|
|
|
00:54:59.660 --> 00:55:02.116 |
|
And let's say that X is positive, then |
|
|
|
00:55:02.116 --> 00:55:06.612 |
|
I want to increase W a bit, and if I |
|
|
|
00:55:06.612 --> 00:55:09.710 |
|
increase W a bit then I'm going to |
|
|
|
00:55:10.070 --> 00:55:13.230 |
|
Increase my score or increase like the |
|
|
|
00:55:13.230 --> 00:55:16.060 |
|
output of my linear model, which will |
|
|
|
00:55:16.060 --> 00:55:18.380 |
|
then better satisfy the margin. |
|
|
|
00:55:21.030 --> 00:55:23.160 |
|
And then I'm going to take. |
|
|
|
00:55:23.160 --> 00:55:25.380 |
|
So this is just the gradient here |
|
|
|
00:55:25.380 --> 00:55:27.760 |
|
Lambda times W plus this thing that I |
|
|
|
00:55:27.760 --> 00:55:28.740 |
|
just talked about. |
|
|
|
00:55:30.920 --> 00:55:32.630 |
|
So we're going to use this to do what's |
|
|
|
00:55:32.630 --> 00:55:34.300 |
|
called gradient descent. |
|
|
|
00:55:35.500 --> 00:55:37.820 |
|
SGD stands for stochastic gradient |
|
|
|
00:55:37.820 --> 00:55:38.310 |
|
descent. |
|
|
|
00:55:39.280 --> 00:55:41.050 |
|
And I'll explain what stochastic, why |
|
|
|
00:55:41.050 --> 00:55:43.420 |
|
it's stochastic, and a little bit. |
|
|
|
00:55:43.420 --> 00:55:45.690 |
|
But this is like a nice illustration of |
|
|
|
00:55:45.690 --> 00:55:47.990 |
|
gradient descent, basically. |
|
|
|
00:55:48.700 --> 00:55:50.213 |
|
You visualize. |
|
|
|
00:55:50.213 --> 00:55:52.600 |
|
You can mentally visualize it as you've |
|
|
|
00:55:52.600 --> 00:55:53.270 |
|
got some. |
|
|
|
00:55:54.370 --> 00:55:56.200 |
|
You've got some surface of your loss |
|
|
|
00:55:56.200 --> 00:55:58.070 |
|
function, so depending on what your |
|
|
|
00:55:58.070 --> 00:55:59.630 |
|
model is, you would have different |
|
|
|
00:55:59.630 --> 00:56:00.220 |
|
losses. |
|
|
|
00:56:00.950 --> 00:56:02.500 |
|
And so here it's just like if your |
|
|
|
00:56:02.500 --> 00:56:04.600 |
|
model just has two parameters, then you |
|
|
|
00:56:04.600 --> 00:56:07.400 |
|
can visualize this as like a 3D surface |
|
|
|
00:56:07.400 --> 00:56:09.070 |
|
where the height is your loss. |
|
|
|
00:56:09.730 --> 00:56:13.420 |
|
And the XY position on this is |
|
|
|
00:56:13.420 --> 00:56:14.950 |
|
the parameters. |
|
|
|
00:56:16.390 --> 00:56:17.730 |
|
And gradient descent, you're just |
|
|
|
00:56:17.730 --> 00:56:19.269 |
|
trying to roll down the hill. |
|
|
|
00:56:19.270 --> 00:56:20.590 |
|
That's why I had a ball rolling down |
|
|
|
00:56:20.590 --> 00:56:21.950 |
|
the hill on the first slide. |
|
|
|
00:56:22.510 --> 00:56:25.710 |
|
And at every position you |
|
|
|
00:56:25.710 --> 00:56:26.990 |
|
calculate the gradient. |
|
|
|
00:56:26.990 --> 00:56:29.070 |
|
That's the direction of the slope and |
|
|
|
00:56:29.070 --> 00:56:29.830 |
|
its speed. |
|
|
|
00:56:30.430 --> 00:56:32.240 |
|
And then you take a little step in the |
|
|
|
00:56:32.240 --> 00:56:34.020 |
|
direction of that gradient downward. |
|
|
|
00:56:35.560 --> 00:56:38.370 |
|
And there's a common terms that you'll |
|
|
|
00:56:38.370 --> 00:56:40.532 |
|
hear in this kind of optimization are |
|
|
|
00:56:40.532 --> 00:56:43.300 |
|
like global optimum and local optimum. |
|
|
|
00:56:43.300 --> 00:56:45.956 |
|
So a global optimum is the lowest point |
|
|
|
00:56:45.956 --> 00:56:48.780 |
|
in the whole like surface of solutions. |
|
|
|
00:56:49.890 --> 00:56:51.660 |
|
That's where you want to end up. |
|
|
|
00:56:51.660 --> 00:56:54.606 |
|
A local optimum means that if you have |
|
|
|
00:56:54.606 --> 00:56:56.960 |
|
that solution then you can't improve it |
|
|
|
00:56:56.960 --> 00:56:58.840 |
|
by taking a small step anywhere. |
|
|
|
00:56:58.840 --> 00:57:00.460 |
|
So you have to go up the hill before |
|
|
|
00:57:00.460 --> 00:57:01.320 |
|
you can go down the hill. |
|
|
|
00:57:02.030 --> 00:57:04.613 |
|
So this is a global optimum here and |
|
|
|
00:57:04.613 --> 00:57:06.329 |
|
this is a local optimum. |
|
|
|
00:57:06.330 --> 00:57:09.720 |
|
Now SVMs are just like a big |
|
|
|
00:57:09.720 --> 00:57:10.430 |
|
bowl. |
|
|
|
00:57:10.430 --> 00:57:11.650 |
|
They are convex. |
|
|
|
00:57:11.650 --> 00:57:13.810 |
|
It's a convex problem where the |
|
|
|
00:57:13.810 --> 00:57:15.820 |
|
only local optimum is the global optimum. |
|
|
|
00:57:16.960 --> 00:57:18.620 |
|
And so with the suitable optimization |
|
|
|
00:57:18.620 --> 00:57:20.090 |
|
algorithm you should always be able to |
|
|
|
00:57:20.090 --> 00:57:21.540 |
|
find the best solution. |
|
|
|
00:57:22.320 --> 00:57:25.260 |
|
But neural networks, which we'll get to |
|
|
|
00:57:25.260 --> 00:57:28.460 |
|
later, are like really bumpy, and so |
|
|
|
00:57:28.460 --> 00:57:29.870 |
|
the optimization is much harder. |
|
|
|
00:57:33.810 --> 00:57:36.080 |
|
So finally, this is the Pegasos |
|
|
|
00:57:36.080 --> 00:57:38.380 |
|
algorithm for stochastic gradient |
|
|
|
00:57:38.380 --> 00:57:38.920 |
|
descent. |
|
|
|
00:57:39.910 --> 00:57:40.490 |
|
And. |
|
|
|
00:57:41.120 --> 00:57:43.309 |
|
Fortunately, it's kind of |
|
|
|
00:57:43.310 --> 00:57:46.490 |
|
short, it's a simple algorithm, but it |
|
|
|
00:57:46.490 --> 00:57:47.790 |
|
takes a little bit of explanation. |
|
|
|
00:57:48.710 --> 00:57:50.200 |
|
Just laughing because my daughter has |
|
|
|
00:57:50.200 --> 00:57:52.720 |
|
this book, fortunately, unfortunately, |
|
|
|
00:57:52.720 --> 00:57:53.360 |
|
where? |
|
|
|
00:57:54.040 --> 00:57:57.710 |
|
Fortunately, unfortunately: he gets |
|
|
|
00:57:57.710 --> 00:57:58.100 |
|
an airplane. |
|
|
|
00:57:58.100 --> 00:58:00.041 |
|
The engine exploded; fortunately he had a |
|
|
|
00:58:00.041 --> 00:58:00.353 |
|
parachute. |
|
|
|
00:58:00.353 --> 00:58:02.552 |
|
Unfortunately there is a hole in the |
|
|
|
00:58:02.552 --> 00:58:02.933 |
|
parachute. |
|
|
|
00:58:02.933 --> 00:58:05.110 |
|
Fortunately there is a haystack below |
|
|
|
00:58:05.110 --> 00:58:05.380 |
|
him. |
|
|
|
00:58:05.380 --> 00:58:07.500 |
|
Unfortunately there is a pitchfork in |
|
|
|
00:58:07.500 --> 00:58:08.080 |
|
the haystack. |
|
|
|
00:58:08.080 --> 00:58:09.490 |
|
Just goes on like that for the whole |
|
|
|
00:58:09.490 --> 00:58:10.010 |
|
book. |
|
|
|
00:58:10.990 --> 00:58:12.700 |
|
It's really funny, so fortunately this |
|
|
|
00:58:12.700 --> 00:58:13.420 |
|
is short. |
|
|
|
00:58:13.420 --> 00:58:15.490 |
|
Unfortunately, it still may be hard to |
|
|
|
00:58:15.490 --> 00:58:16.190 |
|
understand. |
|
|
|
00:58:16.990 --> 00:58:18.760 |
|
And so the. |
|
|
|
00:58:18.760 --> 00:58:21.250 |
|
So we have a training set here. |
|
|
|
00:58:21.250 --> 00:58:23.280 |
|
These are the input training examples. |
|
|
|
00:58:23.940 --> 00:58:25.950 |
|
I've got some regularization weight and |
|
|
|
00:58:25.950 --> 00:58:27.380 |
|
I have some number of iterations that |
|
|
|
00:58:27.380 --> 00:58:28.030 |
|
I'm going to do. |
|
|
|
00:58:28.850 --> 00:58:30.370 |
|
And I initialize the weights to be |
|
|
|
00:58:30.370 --> 00:58:31.120 |
|
zeros. |
|
|
|
00:58:31.120 --> 00:58:32.630 |
|
These are the weights in my model. |
|
|
|
00:58:33.290 --> 00:58:35.220 |
|
And then I step through each iteration. |
|
|
|
00:58:36.070 --> 00:58:38.270 |
|
And I choose some sample. |
|
|
|
00:58:39.280 --> 00:58:41.140 |
|
Uniformly at random, so I just choose |
|
|
|
00:58:41.140 --> 00:58:43.170 |
|
one single training sample from my data |
|
|
|
00:58:43.170 --> 00:58:43.480 |
|
set. |
|
|
|
00:58:44.310 --> 00:58:48.440 |
|
And then I set my learning rate which |
|
|
|
00:58:48.440 --> 00:58:49.100 |
|
is. |
|
|
|
00:58:49.180 --> 00:58:49.790 |
|
|
|
|
|
00:58:52.030 --> 00:58:54.220 |
|
Or I should say, I guess that's it. |
|
|
|
00:58:54.220 --> 00:58:55.720 |
|
So I choose some samples from my data |
|
|
|
00:58:55.720 --> 00:58:56.220 |
|
set. |
|
|
|
00:58:56.220 --> 00:58:57.840 |
|
Then I set my learning rate which is |
|
|
|
00:58:57.840 --> 00:59:00.520 |
|
one over Lambda T so basically my step |
|
|
|
00:59:00.520 --> 00:59:02.945 |
|
size is going to get smaller the more |
|
|
|
00:59:02.945 --> 00:59:04.200 |
|
samples that I process. |
|
|
|
00:59:06.200 --> 00:59:10.200 |
|
And if my margin is less than one, that |
|
|
|
00:59:10.200 --> 00:59:12.330 |
|
means that I'm not happy with my score |
|
|
|
00:59:12.330 --> 00:59:13.330 |
|
for that example. |
|
|
|
00:59:14.120 --> 00:59:16.990 |
|
So I scale my weights by 1 minus |
|
|
|
00:59:16.990 --> 00:59:20.828 |
|
ETA Lambda. |
|
|
|
00:59:20.828 --> 00:59:22.833 |
|
This part is just saying that I want my |
|
|
|
00:59:22.833 --> 00:59:24.160 |
|
weights to get smaller in general |
|
|
|
00:59:24.160 --> 00:59:25.760 |
|
because I'm trying to minimize the |
|
|
|
00:59:25.760 --> 00:59:27.760 |
|
squared weights and that's based on the |
|
|
|
00:59:27.760 --> 00:59:29.570 |
|
derivative of W transpose W. |
|
|
|
00:59:30.480 --> 00:59:32.370 |
|
And then this part is saying I also |
|
|
|
00:59:32.370 --> 00:59:34.180 |
|
want to improve my score for this |
|
|
|
00:59:34.180 --> 00:59:36.110 |
|
example, so I add. |
|
|
|
00:59:37.400 --> 00:59:44.440 |
|
I add ETA YX. So if X is positive |
|
|
|
00:59:44.440 --> 00:59:46.712 |
|
and Y is |
|
|
|
00:59:46.712 --> 00:59:48.340 |
|
positive, then I'm going to increase |
|
|
|
00:59:48.340 --> 00:59:50.790 |
|
the weight so that the score |
|
|
|
00:59:50.790 --> 00:59:51.920 |
|
becomes more positive. |
|
|
|
00:59:52.550 --> 00:59:54.970 |
|
If X is positive and Y is negative, then I'm |
|
|
|
00:59:54.970 --> 00:59:57.438 |
|
going to decrease the weight so |
|
|
|
00:59:57.438 --> 00:59:59.634 |
|
that the score becomes less positive, more |
|
|
|
00:59:59.634 --> 01:00:00.940 |
|
negative and more correct. |
|
|
|
01:00:02.430 --> 01:00:04.410 |
|
And then if I'm happy with my score of |
|
|
|
01:00:04.410 --> 01:00:06.830 |
|
the example, it's outside the margin YW |
|
|
|
01:00:06.830 --> 01:00:07.750 |
|
transpose X. |
|
|
|
01:00:08.950 --> 01:00:12.040 |
|
Is greater or equal to 1, then I only |
|
|
|
01:00:12.040 --> 01:00:13.750 |
|
care about this regularization term, so |
|
|
|
01:00:13.750 --> 01:00:15.010 |
|
I'm just going to make the weight a |
|
|
|
01:00:15.010 --> 01:00:17.100 |
|
little bit smaller because I'm trying |
|
|
|
01:00:17.100 --> 01:00:18.590 |
|
to again minimize the square of the |
|
|
|
01:00:18.590 --> 01:00:18.850 |
|
weights. |
|
|
|
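NOTE
A sketch of the algorithm as just described (assuming numpy; labels in y are +1/-1, and the optional projection step from the paper is omitted):

```python
import numpy as np

def pegasos(X, y, lam=0.01, T=10_000, seed=0):
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for t in range(1, T + 1):
        i = rng.integers(len(X))          # one training sample, uniformly at random
        eta = 1.0 / (lam * t)             # learning rate shrinks over time
        if y[i] * np.dot(w, X[i]) < 1:    # margin violated: shrink w, step toward y*x
            w = (1 - eta * lam) * w + eta * y[i] * X[i]
        else:                             # margin satisfied: only shrink w
            w = (1 - eta * lam) * w
    return w
```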
01:00:20.220 --> 01:00:21.500 |
|
So that's it. |
|
|
|
01:00:21.500 --> 01:00:23.145 |
|
I just stepped through all the |
|
|
|
01:00:23.145 --> 01:00:23.420 |
|
examples. |
|
|
|
01:00:23.420 --> 01:00:25.615 |
|
It's like a pretty short optimization. |
|
|
|
01:00:25.615 --> 01:00:27.750 |
|
And what I'm doing is I'm just like |
|
|
|
01:00:27.750 --> 01:00:30.530 |
|
incrementally trying to improve my |
|
|
|
01:00:30.530 --> 01:00:32.479 |
|
solution for each example that I |
|
|
|
01:00:32.480 --> 01:00:33.490 |
|
encounter. |
|
|
|
01:00:33.490 --> 01:00:37.459 |
|
And what's not intuitive maybe is that |
|
|
|
01:00:37.460 --> 01:00:38.810 |
|
theoretically you can show that this |
|
|
|
01:00:38.810 --> 01:00:42.970 |
|
eventually gives you the best |
|
|
|
01:00:42.970 --> 01:00:44.860 |
|
possible weights for all your examples. |
|
|
|
01:00:47.930 --> 01:00:49.640 |
|
There's a there's another version of |
|
|
|
01:00:49.640 --> 01:00:52.180 |
|
this where you use what's called a mini |
|
|
|
01:00:52.180 --> 01:00:52.770 |
|
batch. |
|
|
|
01:00:53.580 --> 01:00:55.290 |
|
Where, instead of sampling, |
|
|
|
01:00:55.290 --> 01:00:57.165 |
|
Instead of taking one sample at a time, |
|
|
|
01:00:57.165 --> 01:00:59.165 |
|
one training sample at a time, you take |
|
|
|
01:00:59.165 --> 01:01:01.280 |
|
a whole random set of |
|
|
|
01:01:01.280 --> 01:01:01.930 |
|
examples. |
|
|
|
01:01:03.000 --> 01:01:06.970 |
|
And then, instead of |
|
|
|
01:01:06.970 --> 01:01:09.660 |
|
this term involving like the margin |
|
|
|
01:01:09.660 --> 01:01:13.570 |
|
loss of one example, it involves the |
|
|
|
01:01:13.570 --> 01:01:16.564 |
|
average of those losses for all the |
|
|
|
01:01:16.564 --> 01:01:17.999 |
|
examples that violate the margin. |
|
|
|
01:01:18.000 --> 01:01:23.340 |
|
So you're taking the average of Yi Xi, |
|
|
|
01:01:23.340 --> 01:01:24.750 |
|
where these are the examples in your |
|
|
|
01:01:24.750 --> 01:01:26.530 |
|
mini batch that violate the margin. |
|
|
|
01:01:27.200 --> 01:01:29.270 |
|
And multiplying by ETA and adding it to |
|
|
|
01:01:29.270 --> 01:01:29.640 |
|
W. |
|
|
|
01:01:30.740 --> 01:01:32.470 |
|
So if your batch size is 1, it's the |
|
|
|
01:01:32.470 --> 01:01:34.900 |
|
exact same algorithm as before, but by |
|
|
|
01:01:34.900 --> 01:01:36.600 |
|
averaging your gradient over multiple |
|
|
|
01:01:36.600 --> 01:01:38.220 |
|
examples you get a more stable |
|
|
|
01:01:38.220 --> 01:01:39.230 |
|
optimization. |
|
|
|
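NOTE
The mini-batch variant as one update step, sketched with numpy (Xb, yb are a random batch; dividing by the batch size gives the averaged gradient mentioned above):

```python
import numpy as np

def pegasos_minibatch_step(w, Xb, yb, lam, t):
    eta = 1.0 / (lam * t)
    viol = yb * (Xb @ w) < 1              # which batch examples violate the margin
    grad = lam * w
    if viol.any():
        grad = grad - (yb[viol, None] * Xb[viol]).sum(axis=0) / len(Xb)
    return w - eta * grad
```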
01:01:39.230 --> 01:01:41.250 |
|
And it can also be faster if you're |
|
|
|
01:01:41.250 --> 01:01:44.800 |
|
able to parallelize your algorithm like |
|
|
|
01:01:44.800 --> 01:01:47.470 |
|
you can with multiple GPUs, I mean CPUs |
|
|
|
01:01:47.470 --> 01:01:48.120 |
|
or GPU. |
|
|
|
01:01:52.450 --> 01:01:53.580 |
|
Any questions about that? |
|
|
|
01:01:55.250 --> 01:01:55.480 |
|
Yeah. |
|
|
|
01:01:56.770 --> 01:01:57.310 |
|
When it comes to. |
|
|
|
01:01:58.820 --> 01:02:01.350 |
|
Do you divide the regularization |
|
|
|
01:02:01.350 --> 01:02:02.740 |
|
constant by the mini batch size |
|
|
|
01:02:04.020 --> 01:02:05.420 |
|
when |
|
|
|
01:02:05.770 --> 01:02:07.330 |
|
you're updating the |
|
|
|
01:02:07.330 --> 01:02:07.680 |
|
weights? |
|
|
|
01:02:10.790 --> 01:02:12.510 |
|
The average of that batch is not just |
|
|
|
01:02:12.510 --> 01:02:15.110 |
|
like stochastic versus 1, right? |
|
|
|
01:02:15.110 --> 01:02:17.145 |
|
So are you saying should you be taking |
|
|
|
01:02:17.145 --> 01:02:19.781 |
|
like a bigger, are you saying should |
|
|
|
01:02:19.781 --> 01:02:21.789 |
|
you change like how much weight you |
|
|
|
01:02:21.790 --> 01:02:25.350 |
|
assign to this guy where you're trying |
|
|
|
01:02:25.350 --> 01:02:26.350 |
|
to reduce the weight? |
|
|
|
01:02:28.150 --> 01:02:30.930 |
|
Divided by the batch size. |
|
|
|
01:02:32.390 --> 01:02:32.960 |
|
This update. |
|
|
|
01:02:34.290 --> 01:02:36.460 |
|
Ah, if the batch size is 10, then you divide it by |
|
|
|
01:02:36.460 --> 01:02:36.970 |
|
10. |
|
|
|
01:02:36.970 --> 01:02:37.370 |
|
OK. |
|
|
|
01:02:38.230 --> 01:02:39.950 |
|
You could do that. |
|
|
|
01:02:39.950 --> 01:02:41.090 |
|
I mean, this also...
|
|
|
01:02:41.090 --> 01:02:42.813 |
|
You don't have to have a 1/k here,
|
|
|
01:02:42.813 --> 01:02:44.540 |
|
this could be just the sum. |
|
|
|
01:02:44.540 --> 01:02:47.270 |
|
So here they averaged the
|
|
|
01:02:47.270 --> 01:02:48.240 |
|
gradients. |
|
|
|
01:02:48.300 --> 01:02:48.930 |
|
And. |
|
|
|
01:02:49.910 --> 01:02:53.605 |
|
And also, depending on
|
|
|
01:02:53.605 --> 01:02:56.210 |
|
your batch size, your ideal learning |
|
|
|
01:02:56.210 --> 01:02:58.040 |
|
rate and other regularizations can |
|
|
|
01:02:58.040 --> 01:02:58.970 |
|
sometimes change. |
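
To make the question concrete, here's a toy check (my own illustration, not from the lecture materials): summing per-example gradients instead of averaging them just scales the step by the batch size, which dividing the learning rate by the batch size undoes.

```python
import numpy as np

rng = np.random.default_rng(0)
per_example_grads = rng.normal(size=(10, 5))  # pretend batch of 10 gradients
mean_grad = per_example_grads.mean(axis=0)    # the 1/k (averaged) version
sum_grad = per_example_grads.sum(axis=0)      # the plain-sum version

# eta * mean_grad == (eta / 10) * sum_grad, so the two conventions give
# identical updates once the learning rate is rescaled by the batch size.
assert np.allclose(sum_grad, 10 * mean_grad)
```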
|
|
|
01:03:03.220 --> 01:03:07.570 |
|
So we saw SGD, stochastic gradient
|
|
|
01:03:07.570 --> 01:03:10.420 |
|
descent, for the hinge loss, which
|
|
|
01:03:10.420 --> 01:03:11.740 |
|
is what the SVM uses. |
|
|
|
01:03:13.340 --> 01:03:15.110 |
|
It's nice for the hinge loss because |
|
|
|
01:03:15.110 --> 01:03:17.155 |
|
there's no gradient for
|
|
|
01:03:17.155 --> 01:03:19.020 |
|
confidently correct examples, so
|
|
|
01:03:19.020 --> 01:03:21.280 |
|
you only have to optimize over the ones |
|
|
|
01:03:21.280 --> 01:03:22.310 |
|
that are within the margin. |
|
|
|
01:03:24.320 --> 01:03:27.270 |
|
But you can also compute the gradients |
|
|
|
01:03:27.270 --> 01:03:29.265 |
|
for all these other kinds of losses, |
|
|
|
01:03:29.265 --> 01:03:30.830 |
|
like the logistic
|
|
|
01:03:30.830 --> 01:03:32.810 |
|
regression loss or sigmoid loss. |
|
|
|
01:03:35.540 --> 01:03:37.620 |
|
Another logistic loss, another kind of |
|
|
|
01:03:37.620 --> 01:03:39.260 |
|
margin loss. |
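
For reference, a hedged sketch (the function names are mine) of two of those losses written as functions of the margin m = y * (w . x); by the chain rule, the gradient with respect to w is then slope(m) * y * x.

```python
import numpy as np

def hinge(m):               # SVM hinge loss: max(0, 1 - m)
    return np.maximum(0.0, 1.0 - m)

def hinge_slope(m):         # subgradient with respect to the margin
    return np.where(m < 1.0, -1.0, 0.0)

def logistic(m):            # logistic-regression loss: log(1 + exp(-m))
    return np.log1p(np.exp(-m))

def logistic_slope(m):      # derivative: -1 / (1 + exp(m))
    return -1.0 / (1.0 + np.exp(m))
```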
|
|
|
01:03:39.260 --> 01:03:40.730 |
|
These are not things that you should |
|
|
|
01:03:40.730 --> 01:03:41.400 |
|
ever memorize. |
|
|
|
01:03:41.400 --> 01:03:42.570 |
|
Or you can memorize them. |
|
|
|
01:03:42.570 --> 01:03:44.470 |
|
I won't hold it against you, but
|
|
|
01:03:45.510 --> 01:03:46.850 |
|
you can always look them up, so
|
|
|
01:03:46.850 --> 01:03:47.820 |
|
they're not things you need to |
|
|
|
01:03:47.820 --> 01:03:48.160 |
|
memorize. |
|
|
|
01:03:50.430 --> 01:03:53.380 |
|
I will never ask you, like, what is the...
|
|
|
01:03:53.380 --> 01:03:55.270 |
|
I won't ask you what's the
|
|
|
01:03:55.270 --> 01:03:56.600 |
|
gradient of some function. |
|
|
|
01:03:58.090 --> 01:03:58.750 |
|
And. |
|
|
|
01:03:59.660 --> 01:04:02.980 |
|
So this is just comparing like the |
|
|
|
01:04:02.980 --> 01:04:05.930 |
|
optimization speed of this
|
|
|
01:04:05.930 --> 01:04:08.160 |
|
approach, Pegasos, versus other
|
|
|
01:04:08.160 --> 01:04:08.900 |
|
optimizers. |
|
|
|
01:04:10.000 --> 01:04:14.040 |
|
So for example, here's Pegasos.
|
|
|
01:04:14.040 --> 01:04:17.680 |
|
It goes like this: time is on the X axis,
|
|
|
01:04:17.680 --> 01:04:18.493 |
|
in seconds. |
|
|
|
01:04:18.493 --> 01:04:20.920 |
|
So basically you want to get low |
|
|
|
01:04:20.920 --> 01:04:22.300 |
|
because this is the objective that |
|
|
|
01:04:22.300 --> 01:04:23.670 |
|
you're trying to minimize. |
|
|
|
01:04:23.670 --> 01:04:25.900 |
|
So basically Pegasos shoots down to
|
|
|
01:04:25.900 --> 01:04:28.210 |
|
zero in like milliseconds, and these
|
|
|
01:04:28.210 --> 01:04:29.980 |
|
other things are like still chugging |
|
|
|
01:04:29.980 --> 01:04:31.940 |
|
away like many seconds later. |
|
|
|
01:04:33.020 --> 01:04:33.730 |
|
And. |
|
|
|
01:04:34.530 --> 01:04:37.500 |
|
And so consistently if you compare |
|
|
|
01:04:37.500 --> 01:04:40.500 |
|
Pegasos to SVM-Perf, which
|
|
|
01:04:40.500 --> 01:04:41.920 |
|
stands for performance. |
|
|
|
01:04:41.920 --> 01:04:45.050 |
|
It was a highly optimized SVM library. |
|
|
|
01:04:45.940 --> 01:04:49.230 |
|
Or LA SVM, which I forget what that |
|
|
|
01:04:49.230 --> 01:04:50.030 |
|
stands for right now. |
|
|
|
01:04:50.740 --> 01:04:53.140 |
|
But two different SVM optimizers. |
|
|
|
01:04:53.140 --> 01:04:56.056 |
|
Pegasos is just way faster; you reach
|
|
|
01:04:56.056 --> 01:04:59.710 |
|
the ideal solution really
|
|
|
01:04:59.710 --> 01:05:00.690 |
|
really fast. |
|
|
|
01:05:02.020 --> 01:05:04.290 |
|
The other one that performs just as |
|
|
|
01:05:04.290 --> 01:05:06.280 |
|
well, if not better,
|
|
|
01:05:06.280 --> 01:05:09.180 |
|
SDCA, is also a stochastic gradient
|
|
|
01:05:09.180 --> 01:05:13.470 |
|
descent method that chooses
|
|
|
01:05:13.470 --> 01:05:15.160 |
|
the learning rate dynamically instead |
|
|
|
01:05:15.160 --> 01:05:16.738 |
|
of following a single schedule. |
|
|
|
01:05:16.738 --> 01:05:19.080 |
|
The learning rate is the step size. |
|
|
|
01:05:19.080 --> 01:05:20.460 |
|
It's like how much you move in the |
|
|
|
01:05:20.460 --> 01:05:21.290 |
|
gradient direction. |
|
|
|
01:05:24.240 --> 01:05:26.340 |
|
And then in terms of the error, |
|
|
|
01:05:26.340 --> 01:05:28.440 |
|
training time and error: so
|
|
|
01:05:28.440 --> 01:05:30.590 |
|
Pegasos is taking like under a second
|
|
|
01:05:30.590 --> 01:05:32.710 |
|
for all these different problems where |
|
|
|
01:05:32.710 --> 01:05:34.390 |
|
some other libraries could take even |
|
|
|
01:05:34.390 --> 01:05:35.380 |
|
hundreds of seconds. |
|
|
|
01:05:36.290 --> 01:05:39.620 |
|
And it achieves just as good, if not |
|
|
|
01:05:39.620 --> 01:05:42.120 |
|
better, error than most of them. |
|
|
|
01:05:43.000 --> 01:05:44.800 |
|
And in part that's because, even
|
|
|
01:05:44.800 --> 01:05:46.090 |
|
though it's a global objective |
|
|
|
01:05:46.090 --> 01:05:47.520 |
|
function, you have to choose your
|
|
|
01:05:47.520 --> 01:05:50.120 |
|
regularization parameters and other |
|
|
|
01:05:50.120 --> 01:05:50.720 |
|
parameters. |
|
|
|
01:05:51.460 --> 01:05:53.490 |
|
And you have to...
|
|
|
01:05:53.860 --> 01:05:56.930 |
|
It may be hard to tell when you |
|
|
|
01:05:56.930 --> 01:05:58.560 |
|
converge exactly, so you can get small |
|
|
|
01:05:58.560 --> 01:06:00.180 |
|
differences between different |
|
|
|
01:06:00.180 --> 01:06:00.800 |
|
algorithms. |
|
|
|
01:06:04.300 --> 01:06:05.630 |
|
And then they also did...
|
|
|
01:06:05.630 --> 01:06:07.590 |
|
There's a kernelized version which I
|
|
|
01:06:07.590 --> 01:06:07.873 |
|
won't go into,
|
|
|
01:06:07.873 --> 01:06:09.560 |
|
but it's the same
|
|
|
01:06:09.560 --> 01:06:10.350 |
|
principle. |
|
|
|
01:06:10.770 --> 01:06:15.190 |
|
And so they're able to get...
|
|
|
|
|
|
01:06:18.170 --> 01:06:20.520 |
|
They're able to use the kernelized |
|
|
|
01:06:20.520 --> 01:06:21.960 |
|
version to get really good performance. |
|
|
|
01:06:21.960 --> 01:06:24.470 |
|
So on MNIST for example, which was your |
|
|
|
01:06:24.470 --> 01:06:29.010 |
|
homework one, they get a 6%
|
|
|
01:06:29.010 --> 01:06:32.595 |
|
error rate using a kernelized SVM with |
|
|
|
01:06:32.595 --> 01:06:33.460 |
|
a Gaussian kernel. |
|
|
|
01:06:34.070 --> 01:06:35.330 |
|
So it's essentially just like a |
|
|
|
01:06:35.330 --> 01:06:37.070 |
|
slightly smarter nearest neighbor |
|
|
|
01:06:37.070 --> 01:06:37.850 |
|
algorithm. |
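
To see why a Gaussian-kernel SVM acts like a smarter nearest neighbor, here is a sketch of its decision function (the names and signature are illustrative, not from any particular library):

```python
import numpy as np

def gaussian_svm_score(x, sv_X, sv_alpha, sv_y, gamma):
    """f(x) = sum_i alpha_i * y_i * exp(-gamma * ||x - x_i||^2).

    Support vectors close to x dominate the sum, so the prediction is
    effectively a weighted vote of nearby training examples.
    """
    sq_dists = np.sum((sv_X - x) ** 2, axis=1)
    return np.sum(sv_alpha * sv_y * np.exp(-gamma * sq_dists))
```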
|
|
|
01:06:40.840 --> 01:06:42.910 |
|
And the thing that's notable...
|
|
|
01:06:42.910 --> 01:06:44.210 |
|
Actually, this takes...
|
|
|
01:06:48.920 --> 01:06:49.513 |
|
Kind of interesting. |
|
|
|
01:06:49.513 --> 01:06:51.129 |
|
So it's not so fast. |
|
|
|
01:06:51.130 --> 01:06:51.350 |
|
Sorry. |
|
|
|
01:06:51.350 --> 01:06:52.590 |
|
It's just looking at the times. |
|
|
|
01:06:52.590 --> 01:06:54.330 |
|
Yeah, so it's not so fast in the |
|
|
|
01:06:54.330 --> 01:06:55.390 |
|
kernelized version, I guess. |
|
|
|
01:06:55.390 --> 01:06:56.080 |
|
But it still works. |
|
|
|
01:06:56.080 --> 01:06:57.650 |
|
I didn't look into that in depth, so |
|
|
|
01:06:57.650 --> 01:06:57.965 |
|
I'm not sure.
|
|
|
01:06:57.965 --> 01:06:58.770 |
|
I can't explain it. |
|
|
|
01:07:01.980 --> 01:07:02.290 |
|
Alright. |
|
|
|
01:07:02.290 --> 01:07:04.050 |
|
And then finally, one other thing
|
|
|
01:07:04.050 --> 01:07:05.820 |
|
that they look at is the mini batch |
|
|
|
01:07:05.820 --> 01:07:06.270 |
|
size. |
|
|
|
01:07:06.270 --> 01:07:08.810 |
|
So, as you sample chunks of
|
|
|
01:07:08.810 --> 01:07:10.420 |
|
data and do the optimization with |
|
|
|
01:07:10.420 --> 01:07:11.659 |
|
respect to each chunk of data. |
|
|
|
01:07:12.730 --> 01:07:13.680 |
|
If you...
|
|
|
|
|
|
01:07:15.530 --> 01:07:18.520 |
|
This is looking at the...
|
|
|
01:07:19.780 --> 01:07:22.780 |
|
At how close do you get to the ideal |
|
|
|
01:07:22.780 --> 01:07:23.450 |
|
solution? |
|
|
|
01:07:24.540 --> 01:07:26.830 |
|
And this is the mini batch size. |
|
|
|
01:07:26.830 --> 01:07:28.860 |
|
So for a pretty big range of mini batch |
|
|
|
01:07:28.860 --> 01:07:31.295 |
|
sizes you can get like very close to |
|
|
|
01:07:31.295 --> 01:07:32.330 |
|
the ideal solution. |
|
|
|
01:07:33.720 --> 01:07:36.570 |
|
So this is making an approximation |
|
|
|
01:07:36.570 --> 01:07:39.660 |
|
because at every step you're choosing
|
|
|
01:07:39.660 --> 01:07:41.500 |
|
your step based on a subset of the |
|
|
|
01:07:41.500 --> 01:07:41.860 |
|
data. |
|
|
|
01:07:42.860 --> 01:07:47.530 |
|
But for like a big range of conditions, |
|
|
|
01:07:47.530 --> 01:07:49.960 |
|
it gives you a nearly ideal solution.
|
|
|
01:07:50.700 --> 01:07:53.459 |
|
And these are after different
|
|
|
01:07:53.460 --> 01:07:55.635 |
|
numbers of
|
|
|
01:07:55.635 --> 01:07:56.000 |
|
iterations. |
|
|
|
01:07:56.000 --> 01:07:58.533 |
|
So if you do 4K iterations, you're at |
|
|
|
01:07:58.533 --> 01:08:00.542 |
|
the black line, 16K iterations you're
|
|
|
01:08:00.542 --> 01:08:03.083 |
|
at the blue, and 64K iterations you're |
|
|
|
01:08:03.083 --> 01:08:04.050 |
|
at the red. |
|
|
|
01:08:05.030 --> 01:08:06.680 |
|
And yeah. |
|
|
|
01:08:11.670 --> 01:08:13.200 |
|
And then they also did an experiment |
|
|
|
01:08:13.200 --> 01:08:15.300 |
|
showing, like in their original paper, |
|
|
|
01:08:15.300 --> 01:08:17.120 |
|
you would randomly sample with |
|
|
|
01:08:17.120 --> 01:08:18.410 |
|
replacement from the data.
|
|
|
01:08:18.410 --> 01:08:20.420 |
|
But if you just
|
|
|
01:08:20.420 --> 01:08:22.100 |
|
shuffle your data, essentially for |
|
|
|
01:08:22.100 --> 01:08:23.750 |
|
what's called an epoch, which is like
|
|
|
01:08:23.750 --> 01:08:25.780 |
|
one cycle through the data, then you do |
|
|
|
01:08:25.780 --> 01:08:26.250 |
|
better. |
|
|
|
01:08:26.250 --> 01:08:28.680 |
|
So that's how all optimization
|
|
|
01:08:28.680 --> 01:08:30.367 |
|
algorithms that I see now work: you
|
|
|
01:08:30.367 --> 01:08:32.386 |
|
essentially shuffle the data, iterate |
|
|
|
01:08:32.386 --> 01:08:35.440 |
|
through all the data and then reshuffle |
|
|
|
01:08:35.440 --> 01:08:37.360 |
|
it and iterate again and each of those |
|
|
|
01:08:37.360 --> 01:08:37.920 |
|
iterations. |
|
|
|
01:08:37.970 --> 01:08:39.490 |
|
through the data is called an epoch.
|
|
|
01:08:41.110 --> 01:08:41.750 |
|
Epic. |
|
|
|
01:08:41.750 --> 01:08:42.760 |
|
I never know how to pronounce it. |
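
A minimal sketch of that shuffle-per-epoch pattern (the names are mine; step_fn could be the mini-batch update sketched earlier):

```python
import numpy as np

def train(w, X, y, lam, n_epochs, batch_size, step_fn):
    """Shuffle once per epoch, sweep the data in mini-batches, repeat."""
    t, n = 1, len(y)
    for _ in range(n_epochs):
        order = np.random.permutation(n)      # reshuffle each epoch
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            w = step_fn(w, X[idx], y[idx], lam, t)
            t += 1
    return w
```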
|
|
|
01:08:44.260 --> 01:08:46.440 |
|
And then they also just showed like |
|
|
|
01:08:46.440 --> 01:08:48.363 |
|
their learning rate schedule seems to |
|
|
|
01:08:48.363 --> 01:08:50.150 |
|
provide much more stable results
|
|
|
01:08:50.150 --> 01:08:51.750 |
|
compared to a previous approach that |
|
|
|
01:08:51.750 --> 01:08:53.540 |
|
would use a fixed learning rate for all |
|
|
|
01:08:53.540 --> 01:08:55.200 |
|
the iterations.
|
|
|
01:08:58.610 --> 01:09:02.230 |
|
So, takeaways and surprising facts |
|
|
|
01:09:02.230 --> 01:09:03.190 |
|
about Pegasos.
|
|
|
01:09:04.460 --> 01:09:08.480 |
|
So it's using this SGD, which could be |
|
|
|
01:09:08.480 --> 01:09:11.730 |
|
an acronym for subgradient descent or
|
|
|
01:09:11.730 --> 01:09:13.560 |
|
stochastic gradient descent, and it |
|
|
|
01:09:13.560 --> 01:09:14.780 |
|
applies both ways here. |
|
|
|
01:09:15.580 --> 01:09:16.585 |
|
It's very simple. |
|
|
|
01:09:16.585 --> 01:09:18.160 |
|
It's an effective optimization |
|
|
|
01:09:18.160 --> 01:09:18.675 |
|
algorithm. |
|
|
|
01:09:18.675 --> 01:09:20.830 |
|
It's probably the most widely used |
|
|
|
01:09:20.830 --> 01:09:22.640 |
|
optimization algorithm in machine |
|
|
|
01:09:22.640 --> 01:09:22.990 |
|
learning. |
|
|
|
01:09:24.330 --> 01:09:26.230 |
|
There are many variants of it, so
|
|
|
01:09:26.230 --> 01:09:28.590 |
|
I'll talk about some, like Adam, in a
|
|
|
01:09:28.590 --> 01:09:30.990 |
|
couple classes, but the idea is that |
|
|
|
01:09:30.990 --> 01:09:32.540 |
|
you just step towards a better solution |
|
|
|
01:09:32.540 --> 01:09:34.380 |
|
of your parameters based on a small |
|
|
|
01:09:34.380 --> 01:09:35.830 |
|
sample of the training data |
|
|
|
01:09:35.830 --> 01:09:36.550 |
|
iteratively. |
|
|
|
01:09:37.490 --> 01:09:39.370 |
|
It's not very sensitive to the mini batch
|
|
|
01:09:39.370 --> 01:09:39.940 |
|
size. |
|
|
|
01:09:40.990 --> 01:09:43.140 |
|
With larger batches you get more
|
|
|
01:09:43.140 --> 01:09:44.720 |
|
stable estimates of the gradient, and it
|
|
|
01:09:44.720 --> 01:09:46.560 |
|
can be a lot faster if you're doing GPU |
|
|
|
01:09:46.560 --> 01:09:47.430 |
|
processing. |
|
|
|
01:09:47.430 --> 01:09:50.860 |
|
So in machine learning and like large |
|
|
|
01:09:50.860 --> 01:09:53.470 |
|
scale machine learning and deep learning,
|
|
|
01:09:54.150 --> 01:09:56.790 |
|
you tend to prefer large batches, up to
|
|
|
01:09:56.790 --> 01:09:58.520 |
|
what your GPU memory can hold.
|
|
|
01:09:59.680 --> 01:10:01.620 |
|
The same learning schedule is effective |
|
|
|
01:10:01.620 --> 01:10:04.120 |
|
across many problems, so
|
|
|
01:10:04.120 --> 01:10:05.865 |
|
decreasing the learning rate gradually |
|
|
|
01:10:05.865 --> 01:10:08.610 |
|
is generally a good way to
|
|
|
01:10:08.610 --> 01:10:08.900 |
|
go. |
|
|
|
01:10:08.900 --> 01:10:10.780 |
|
It doesn't require a lot of tuning. |
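
Concretely, the 1/(lambda * t) schedule from earlier decays on its own, taking large steps early and small steps late (toy numbers, my own illustration):

```python
lam = 0.1
for t in [1, 10, 100, 1000]:
    eta = 1.0 / (lam * t)
    print(t, eta)   # 10.0, 1.0, 0.1, 0.01 -- big steps early, fine-tuning late
```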
|
|
|
01:10:12.550 --> 01:10:15.070 |
|
And one thing, I don't know if it's
|
|
|
01:10:15.070 --> 01:10:17.350 |
|
in this paper, but I forgot to
|
|
|
01:10:17.350 --> 01:10:18.880 |
|
mention, this work was done at TTI |
|
|
|
01:10:18.880 --> 01:10:21.345 |
|
Chicago, which was just very new then.
|
|
|
01:10:21.345 --> 01:10:23.890 |
|
So one of the first talks they gave was
|
|
|
01:10:23.890 --> 01:10:25.540 |
|
for our group at UIUC. |
|
|
|
01:10:25.540 --> 01:10:27.190 |
|
So I remember them talking
|
|
|
01:10:27.190 --> 01:10:27.490 |
|
about it. |
|
|
|
01:10:28.360 --> 01:10:29.810 |
|
And one of the things that's kind of a |
|
|
|
01:10:29.810 --> 01:10:30.820 |
|
surprising result
|
|
|
01:10:31.650 --> 01:10:35.390 |
|
is that with this algorithm it's faster
|
|
|
01:10:35.390 --> 01:10:37.880 |
|
to train using a larger training set, |
|
|
|
01:10:37.880 --> 01:10:40.180 |
|
so that's not super intuitive, right? |
|
|
|
01:10:41.370 --> 01:10:42.710 |
|
In order to get the same test |
|
|
|
01:10:42.710 --> 01:10:43.430 |
|
performance. |
|
|
|
01:10:43.430 --> 01:10:46.990 |
|
And the reason is, if you think
|
|
|
01:10:46.990 --> 01:10:49.780 |
|
about a little bit of data: if you
|
|
|
01:10:49.780 --> 01:10:51.970 |
|
have a little bit of data, then you |
|
|
|
01:10:51.970 --> 01:10:53.540 |
|
have to keep on iterating over
|
|
|
01:10:53.540 --> 01:10:55.450 |
|
that same little bit of data and each |
|
|
|
01:10:55.450 --> 01:10:57.010 |
|
time you iterate over it, you're just |
|
|
|
01:10:57.010 --> 01:10:58.330 |
|
learning a little bit that's new.
|
|
|
01:10:58.330 --> 01:10:59.660 |
|
It's like trying to keep on
|
|
|
01:10:59.660 --> 01:11:00.999 |
|
squeezing the same water out of a |
|
|
|
01:11:01.000 --> 01:11:01.510 |
|
sponge. |
|
|
|
01:11:02.560 --> 01:11:04.557 |
|
But if you have a lot of data and |
|
|
|
01:11:04.557 --> 01:11:06.270 |
|
you're cycling through this big thing |
|
|
|
01:11:06.270 --> 01:11:08.030 |
|
of data, you keep on seeing new things |
|
|
|
01:11:08.030 --> 01:11:10.125 |
|
as you go through the data.
|
|
|
01:11:10.125 --> 01:11:12.290 |
|
And so you're learning more,
|
|
|
01:11:12.290 --> 01:11:14.050 |
|
learning more per unit time.
|
|
|
01:11:14.690 --> 01:11:17.719 |
|
So if you have a million examples,
|
|
|
01:11:17.719 --> 01:11:20.520 |
|
and you do a million steps with
|
|
|
01:11:20.520 --> 01:11:22.220 |
|
one example each, then you learn a lot |
|
|
|
01:11:22.220 --> 01:11:22.930 |
|
new. |
|
|
|
01:11:22.930 --> 01:11:25.257 |
|
But if you have 10 examples and you do |
|
|
|
01:11:25.257 --> 01:11:26.829 |
|
a million steps, then
|
|
|
01:11:26.830 --> 01:11:28.955 |
|
you've just seen those 10 examples
|
|
|
01:11:28.955 --> 01:11:29.830 |
|
10,000 times. |
|
|
|
01:11:30.660 --> 01:11:32.410 |
|
Or rather, 100,000 times.
|
|
|
01:11:32.410 --> 01:11:36.630 |
|
So if you get a larger training set, |
|
|
|
01:11:36.630 --> 01:11:38.440 |
|
you actually train faster.
|
|
|
01:11:38.440 --> 01:11:40.230 |
|
It's faster to get the same test |
|
|
|
01:11:40.230 --> 01:11:41.840 |
|
performance. |
|
|
|
01:11:41.840 --> 01:11:44.020 |
|
And where that comes into play is that |
|
|
|
01:11:44.020 --> 01:11:45.978 |
|
sometimes I'll have somebody say,
|
|
|
01:11:45.978 --> 01:11:48.355 |
|
I don't
|
|
|
01:11:48.355 --> 01:11:49.939 |
|
want to get more training examples |
|
|
|
01:11:49.940 --> 01:11:51.700 |
|
because my optimization will take too |
|
|
|
01:11:51.700 --> 01:11:52.390 |
|
long. |
|
|
|
01:11:52.390 --> 01:11:54.650 |
|
But actually your optimization will be |
|
|
|
01:11:54.650 --> 01:11:56.116 |
|
faster if you have more training |
|
|
|
01:11:56.116 --> 01:11:57.500 |
|
examples, if you're using this kind of |
|
|
|
01:11:57.500 --> 01:11:59.090 |
|
approach, if what you're trying to do |
|
|
|
01:11:59.090 --> 01:12:01.490 |
|
is maximize your performance. |
|
|
|
01:12:01.550 --> 01:12:02.780 |
|
Which is pretty much what you're always |
|
|
|
01:12:02.780 --> 01:12:03.160 |
|
trying to do. |
|
|
|
01:12:04.090 --> 01:12:06.810 |
|
So a larger training set means faster
|
|
|
01:12:06.810 --> 01:12:07.920 |
|
runtime for training. |
|
|
|
01:12:10.280 --> 01:12:14.330 |
|
So that's all about SVMs and SGD.
|
|
|
01:12:14.330 --> 01:12:16.640 |
|
I know that's a lot to take in, but |
|
|
|
01:12:16.640 --> 01:12:18.270 |
|
thank you for being patient and |
|
|
|
01:12:18.270 --> 01:12:18.590 |
|
listening. |
|
|
|
01:12:19.300 --> 01:12:21.480 |
|
And next week I'm going to start |
|
|
|
01:12:21.480 --> 01:12:22.440 |
|
talking about neural networks. |
|
|
|
01:12:22.440 --> 01:12:23.990 |
|
So I'll talk about multilayer |
|
|
|
01:12:23.990 --> 01:12:26.450 |
|
perceptrons and then some concepts in
|
|
|
01:12:26.450 --> 01:12:28.120 |
|
deep networks. |
|
|
|
01:12:28.120 --> 01:12:28.800 |
|
Thank you. |
|
|
|
01:12:28.800 --> 01:12:30.040 |
|
Have a good weekend. |
|
|
|
|