|
WEBVTT Kind: captions; Language: en-US |
|
|
|
NOTE |
|
Created on 2024-02-07T20:57:30.0567947Z by ClassTranscribe |
|
|
|
00:02:02.640 --> 00:02:03.590 |
|
Good morning, everybody. |
|
|
|
00:02:07.770 --> 00:02:08.170 |
|
So. |
|
|
|
00:02:09.360 --> 00:02:12.270 |
|
I lost my HDMI connector so the slides |
|
|
|
00:02:12.270 --> 00:02:14.740 |
|
are a little stretched out but still |
|
|
|
00:02:14.740 --> 00:02:15.230 |
|
visible. |
|
|
|
00:02:15.860 --> 00:02:17.190 |
|
I guess that's what it does with VGA.
|
|
|
00:02:18.980 --> 00:02:19.570 |
|
All right. |
|
|
|
00:02:19.570 --> 00:02:22.390 |
|
So last class we learned about |
|
|
|
00:02:22.390 --> 00:02:24.700 |
|
Perceptrons and MLPs.
|
|
|
00:02:25.620 --> 00:02:28.410 |
|
So we talked about how Perceptrons are |
|
|
|
00:02:28.410 --> 00:02:30.340 |
|
linear prediction models and really the |
|
|
|
00:02:30.340 --> 00:02:32.070 |
|
only difference between a Perceptron |
|
|
|
00:02:32.070 --> 00:02:32.760 |
|
and a
|
|
|
00:02:33.920 --> 00:02:36.330 |
|
Logistic Regressor is that often people
|
|
|
00:02:36.330 --> 00:02:38.290 |
|
will draw Perceptron in terms of these |
|
|
|
00:02:38.290 --> 00:02:40.210 |
|
inputs and weights and outputs. |
|
|
|
00:02:40.210 --> 00:02:40.450 |
|
So. |
|
|
|
00:02:41.140 --> 00:02:43.110 |
|
Almost more of a frame of thought than
|
|
|
00:02:43.110 --> 00:02:44.060 |
|
a different algorithm. |
|
|
|
00:02:45.880 --> 00:02:48.580 |
|
MLPs are nonlinear prediction
|
|
|
00:02:48.580 --> 00:02:51.510 |
|
models, so they're
|
|
|
00:02:51.510 --> 00:02:54.080 |
|
basically Perceptrons stacked on top of
|
|
|
00:02:54.080 --> 00:02:54.805 |
|
each other. |
|
|
|
00:02:54.805 --> 00:02:57.240 |
|
So given some inputs, you predict some |
|
|
|
00:02:57.240 --> 00:02:59.030 |
|
intermediate values in the inner |
|
|
|
00:02:59.030 --> 00:02:59.460 |
|
layers. |
|
|
|
00:03:00.160 --> 00:03:01.250 |
|
And then they go through some |
|
|
|
00:03:01.250 --> 00:03:03.830 |
|
nonlinearity like a Sigmoid or ReLU. |
|
|
|
00:03:04.470 --> 00:03:06.970 |
|
And then from those intermediate values |
|
|
|
00:03:06.970 --> 00:03:08.940 |
|
you then predict the next layer of |
|
|
|
00:03:08.940 --> 00:03:10.100 |
|
values or the Output. |
|
|
|
00:03:11.890 --> 00:03:13.780 |
|
And MLPs, or multilayer
|
|
|
00:03:13.780 --> 00:03:17.090 |
|
Perceptrons, can model more complicated
|
|
|
00:03:17.090 --> 00:03:18.995 |
|
functions, but they're harder to |
|
|
|
00:03:18.995 --> 00:03:19.400 |
|
optimize. |
|
|
|
00:03:19.400 --> 00:03:21.830 |
|
So while a Perceptron is convex, you |
|
|
|
00:03:21.830 --> 00:03:24.180 |
|
can optimize it kind of perfectly to |
|
|
|
00:03:24.180 --> 00:03:24.880 |
|
some precision. |
|
|
|
00:03:25.860 --> 00:03:27.990 |
|
An MLP is very nonconvex.
|
|
|
00:03:27.990 --> 00:03:31.280 |
|
If you were to plot the
|
|
|
00:03:31.280 --> 00:03:34.400 |
|
loss versus the weights, it would be |
|
|
|
00:03:34.400 --> 00:03:35.540 |
|
really bumpy. |
|
|
|
00:03:35.540 --> 00:03:37.448 |
|
There's lots of different local minima |
|
|
|
00:03:37.448 --> 00:03:41.360 |
|
within that loss surface,
|
|
|
00:03:41.360 --> 00:03:43.090 |
|
and that makes it harder to optimize. |
|
|
|
00:03:45.090 --> 00:03:47.204 |
|
The way that you optimize it, the way |
|
|
|
00:03:47.204 --> 00:03:48.640 |
|
that you optimize Perceptrons |
|
|
|
00:03:48.640 --> 00:03:52.210 |
|
classically, as well as MLPs, is by
|
|
|
00:03:52.210 --> 00:03:54.310 |
|
stochastic gradient descent where you |
|
|
|
00:03:54.310 --> 00:03:56.170 |
|
iterate over batches of data you |
|
|
|
00:03:56.170 --> 00:03:56.590 |
|
compute
|
|
|
00:03:57.370 --> 00:03:59.370 |
|
how you could change those weights in
|
|
|
00:03:59.370 --> 00:04:01.390 |
|
order to reduce the loss a little bit |
|
|
|
00:04:01.390 --> 00:04:03.235 |
|
on that data and then take a step in |
|
|
|
00:04:03.235 --> 00:04:03.850 |
|
that direction.
|
|
|
00:04:07.300 --> 00:04:08.570 |
|
So there is another. |
|
|
|
00:04:10.050 --> 00:04:10.970 |
|
Sorry, one sec.
|
|
|
00:04:10.970 --> 00:04:12.120 |
|
OK, I'll leave it. |
|
|
|
00:04:12.830 --> 00:04:14.370 |
|
Yeah, it's a little hard to see, but |
|
|
|
00:04:14.370 --> 00:04:16.930 |
|
anyway, so there's another application |
|
|
|
00:04:16.930 --> 00:04:19.640 |
|
I want to talk about for MLPs, and this
|
|
|
00:04:19.640 --> 00:04:21.500 |
|
is actually one of the stretch goals |
|
|
|
00:04:21.500 --> 00:04:23.720 |
|
in the homework, or part of
|
|
|
00:04:23.720 --> 00:04:25.349 |
|
this is a stretch goal in the homework. |
|
|
|
00:04:26.330 --> 00:04:27.020 |
|
So. |
|
|
|
00:04:28.410 --> 00:04:31.000 |
|
So the idea here is to use an MLP. |
|
|
|
00:04:31.770 --> 00:04:35.970 |
|
In order to encode data or images. |
|
|
|
00:04:37.120 --> 00:04:38.690 |
|
So you just have. |
|
|
|
00:04:38.690 --> 00:04:41.140 |
|
The concept is kind of simple. |
|
|
|
00:04:41.140 --> 00:04:43.670 |
|
You have this network, it takes as |
|
|
|
00:04:43.670 --> 00:04:44.690 |
|
input. |
|
|
|
00:04:45.700 --> 00:04:47.550 |
|
Positional features, so this could be |
|
|
|
00:04:47.550 --> 00:04:48.960 |
|
like a pixel position. |
|
|
|
00:04:50.200 --> 00:04:53.110 |
|
And then you have some transform on it, |
|
|
|
00:04:53.110 --> 00:04:54.400 |
|
which I'll talk about in a moment, but |
|
|
|
00:04:54.400 --> 00:04:55.178 |
|
you could just have it. |
|
|
|
00:04:55.178 --> 00:04:57.040 |
|
In the simplest case, the Input is just |
|
|
|
00:04:57.040 --> 00:04:58.310 |
|
two pixel positions. |
|
|
|
00:04:59.280 --> 00:05:01.760 |
|
And then the output is the color the |
|
|
|
00:05:01.760 --> 00:05:04.370 |
|
red, green and blue value of the given |
|
|
|
00:05:04.370 --> 00:05:04.850 |
|
pixel. |
|
|
|
00:05:06.700 --> 00:05:11.380 |
|
And so these are the experiments in
|
|
|
00:05:11.380 --> 00:05:14.600 |
|
NeRF, which was sort of.
|
|
|
00:05:14.600 --> 00:05:16.170 |
|
There's another related paper, Fourier
|
|
|
00:05:16.170 --> 00:05:18.360 |
|
Features, which explains some aspect of |
|
|
|
00:05:18.360 --> 00:05:18.490 |
|
it. |
|
|
|
00:05:19.540 --> 00:05:21.350 |
|
They just have an L2 loss.
|
|
|
00:05:21.350 --> 00:05:23.020 |
|
So you want to you have at the end of |
|
|
|
00:05:23.020 --> 00:05:26.610 |
|
this some Sigmoid that maps values
|
|
|
00:05:26.610 --> 00:05:28.190 |
|
into the range zero to one, and then you have
|
|
|
00:05:28.190 --> 00:05:31.062 |
|
an L2 loss on what was the color that |
|
|
|
00:05:31.062 --> 00:05:33.063 |
|
you predicted versus the true color of |
|
|
|
00:05:33.063 --> 00:05:33.720 |
|
the pixel. |
|
|
|
00:05:34.570 --> 00:05:36.870 |
|
And based on that you can like compress |
|
|
|
00:05:36.870 --> 00:05:38.460 |
|
an image, you can encode an image in |
|
|
|
00:05:38.460 --> 00:05:40.180 |
|
the network, which can make it like a |
|
|
|
00:05:40.180 --> 00:05:41.620 |
|
very highly compressed form. |
|
|
|
00:05:42.770 --> 00:05:45.140 |
|
You can also encode 3D shapes with |
|
|
|
00:05:45.140 --> 00:05:47.360 |
|
similar things where you Map from XYZ |
|
|
|
00:05:47.360 --> 00:05:49.440 |
|
to some kind of occupancy value whether |
|
|
|
00:05:49.440 --> 00:05:52.070 |
|
a point in the scene is inside a |
|
|
|
00:05:52.070 --> 00:05:52.830 |
|
surface or not. |
|
|
|
00:05:53.820 --> 00:05:56.550 |
|
You can encode MRI images by mapping |
|
|
|
00:05:56.550 --> 00:05:59.775 |
|
XYZ to density, and you can even create |
|
|
|
00:05:59.775 --> 00:06:02.820 |
|
3D models by solving for. |
|
|
|
00:06:03.460 --> 00:06:06.750 |
|
The intensities of all the images given |
|
|
|
00:06:06.750 --> 00:06:08.870 |
|
the positions and poses of the images.
|
|
|
00:06:09.830 --> 00:06:11.780 |
|
I think we're here first and then. |
|
|
|
00:06:13.320 --> 00:06:13.730 |
|
Yeah. |
|
|
|
00:06:21.890 --> 00:06:25.010 |
|
So L1 and L2 are distances. |
|
|
|
00:06:25.010 --> 00:06:27.463 |
|
L1 is the sum of absolute differences |
|
|
|
00:06:27.463 --> 00:06:29.760 |
|
of two vectors, so they're both like |
|
|
|
00:06:29.760 --> 00:06:32.250 |
|
vector distances.
|
|
|
00:06:33.240 --> 00:06:35.600 |
|
L1 is the sum of absolute differences |
|
|
|
00:06:35.600 --> 00:06:35.770 |
|
and
|
|
|
00:06:35.770 --> 00:06:38.620 |
|
L2 is the square root of the sum of |
|
|
|
00:06:38.620 --> 00:06:39.600 |
|
squared differences.
|
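NOTE
A quick numeric sketch of the two distances just described; the example vectors here are made up, not from the course.
import numpy as np
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 0.0, 3.0])
l1 = np.sum(np.abs(a - b))           # L1: sum of absolute differences = 3 + 2 + 0 = 5
l2 = np.sqrt(np.sum((a - b) ** 2))   # L2: square root of the sum of squared differences = sqrt(13)
print(l1, l2)                        # 5.0 3.605...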
|
|
00:06:40.640 --> 00:06:43.165 |
|
They're like so like my L2 distance to |
|
|
|
00:06:43.165 --> 00:06:45.060 |
|
that corner is if I just take a |
|
|
|
00:06:45.060 --> 00:06:47.337 |
|
straight line to that corner and my L1 |
|
|
|
00:06:47.337 --> 00:06:49.334 |
|
distance is if I like walk in this |
|
|
|
00:06:49.334 --> 00:06:51.081 |
|
direction and then I walk in this |
|
|
|
00:06:51.081 --> 00:06:52.890 |
|
direction and then I keep doing that. |
|
|
|
00:06:56.160 --> 00:06:56.420 |
|
Yep. |
|
|
|
00:07:01.980 --> 00:07:03.210 |
|
Yeah, right. |
|
|
|
00:07:03.210 --> 00:07:04.020 |
|
Exactly. |
|
|
|
00:07:04.020 --> 00:07:06.030 |
|
So it's just taking XY coordinates and |
|
|
|
00:07:06.030 --> 00:07:07.210 |
|
it's predicting the color. |
|
|
|
00:07:07.210 --> 00:07:07.420 |
|
Yep. |
|
|
|
00:07:08.870 --> 00:07:11.990 |
|
And so it's not like. |
|
|
|
00:07:11.990 --> 00:07:14.120 |
|
So you might be thinking like why would |
|
|
|
00:07:14.120 --> 00:07:14.750 |
|
you do this? |
|
|
|
00:07:14.750 --> 00:07:16.235 |
|
Or like what's the point of doing that |
|
|
|
00:07:16.235 --> 00:07:17.210 |
|
for an image? |
|
|
|
00:07:17.210 --> 00:07:18.440 |
|
It could be for compression. |
|
|
|
00:07:19.230 --> 00:07:20.930 |
|
But the really amazing thing, I mean |
|
|
|
00:07:20.930 --> 00:07:23.550 |
|
this is the basic idea behind this |
|
|
|
00:07:23.550 --> 00:07:24.620 |
|
technique called NeRF.
|
|
|
00:07:25.280 --> 00:07:27.950 |
|
Which is an exploding topic in
|
|
|
00:07:27.950 --> 00:07:28.750 |
|
computer vision. |
|
|
|
00:07:29.550 --> 00:07:32.210 |
|
And the surprising thing is that if you |
|
|
|
00:07:32.210 --> 00:07:34.830 |
|
have a bunch of images, where you know the
|
|
|
00:07:34.830 --> 00:07:37.020 |
|
positions of those images in 3D space |
|
|
|
00:07:37.020 --> 00:07:37.800 |
|
and where they're looking.
|
|
|
00:07:38.580 --> 00:07:42.190 |
|
And you simply solve to map from the |
|
|
|
00:07:42.190 --> 00:07:45.333 |
|
pixel or from the ray, like through a
|
|
|
00:07:45.333 --> 00:07:47.370 |
|
pixel of each image, or from a 3D point |
|
|
|
00:07:47.370 --> 00:07:51.065 |
|
and direction into the color of the |
|
|
|
00:07:51.065 --> 00:07:53.860 |
|
image that observes that point or that |
|
|
|
00:07:53.860 --> 00:07:54.160 |
|
Ray. |
|
|
|
00:07:54.910 --> 00:07:59.130 |
|
You can solve like if you optimize that |
|
|
|
00:07:59.130 --> 00:07:59.980 |
|
problem. |
|
|
|
00:07:59.980 --> 00:08:02.700 |
|
Then you solve for kind of like a colored
|
|
|
00:08:02.700 --> 00:08:06.300 |
|
3D scene that allows you to draw new |
|
|
|
00:08:06.300 --> 00:08:08.660 |
|
pictures from arbitrary positions and |
|
|
|
00:08:08.660 --> 00:08:09.830 |
|
they look photorealistic. |
|
|
|
00:08:10.820 --> 00:08:13.020 |
|
So the network actually discovers the |
|
|
|
00:08:13.020 --> 00:08:14.820 |
|
underlying geometry because it's the |
|
|
|
00:08:14.820 --> 00:08:16.480 |
|
simplest explanation for the |
|
|
|
00:08:16.480 --> 00:08:17.910 |
|
intensities that are observed in all |
|
|
|
00:08:17.910 --> 00:08:18.480 |
|
these pictures. |
|
|
|
00:08:22.340 --> 00:08:24.576 |
|
So the network is pretty simple, it's |
|
|
|
00:08:24.576 --> 00:08:25.720 |
|
just a four layer. |
|
|
|
00:08:25.720 --> 00:08:27.960 |
|
They use 6 layers for this NeRF
|
|
|
00:08:27.960 --> 00:08:29.868 |
|
problem, but for all the others it's |
|
|
|
00:08:29.868 --> 00:08:31.159 |
|
just a four layer network. |
|
|
|
00:08:32.090 --> 00:08:35.100 |
|
They're linear layers followed by ReLU, |
|
|
|
00:08:35.100 --> 00:08:36.560 |
|
except on the Output. |
|
|
|
00:08:36.560 --> 00:08:39.940 |
|
For RGB for example, you have a Sigmoid |
|
|
|
00:08:39.940 --> 00:08:41.450 |
|
so that you map it to a zero to 1 |
|
|
|
00:08:41.450 --> 00:08:41.850 |
|
value. |
|
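NOTE
A minimal sketch of the kind of network just described, assuming PyTorch; the hidden width, number of training steps, and the random placeholder data are illustrative assumptions, not the actual paper or homework code.
import torch
import torch.nn as nn
coords = torch.rand(1024, 2)   # placeholder (x, y) pixel positions scaled to [0, 1]
colors = torch.rand(1024, 3)   # placeholder RGB targets in [0, 1]
model = nn.Sequential(
    nn.Linear(2, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 3), nn.Sigmoid(),   # Sigmoid squashes outputs into [0, 1] for RGB
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()   # L2 loss between predicted and true pixel colors
for step in range(200):
    opt.zero_grad()
    loss = loss_fn(model(coords), colors)
    loss.backward()
    opt.step()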
|
|
00:08:43.820 --> 00:08:47.040 |
|
And one of the points of the paper is |
|
|
|
00:08:47.040 --> 00:08:49.610 |
|
that if you try to encode the pixel |
|
|
|
00:08:49.610 --> 00:08:52.510 |
|
positions directly, it kind of works, |
|
|
|
00:08:52.510 --> 00:08:55.520 |
|
but you get these results shown above |
|
|
|
00:08:55.520 --> 00:08:57.806 |
|
where, oops, sorry, these results shown |
|
|
|
00:08:57.806 --> 00:08:59.530 |
|
above where it's like pretty blurry. |
|
|
|
00:09:00.190 --> 00:09:02.180 |
|
And the reason for that is that the |
|
|
|
00:09:02.180 --> 00:09:05.230 |
|
mapping from pixel position. |
|
|
|
00:09:05.940 --> 00:09:08.430 |
|
To color is very nonlinear. |
|
|
|
00:09:09.420 --> 00:09:09.830 |
|
So. |
|
|
|
00:09:10.610 --> 00:09:12.050 |
|
Essentially you can think of the |
|
|
|
00:09:12.050 --> 00:09:15.550 |
|
networks, as I talked about, with like
|
|
|
00:09:15.550 --> 00:09:18.120 |
|
kernel representations and the duality |
|
|
|
00:09:18.120 --> 00:09:19.290 |
|
of linear models. |
|
|
|
00:09:20.080 --> 00:09:21.900 |
|
You can think about linear models as |
|
|
|
00:09:21.900 --> 00:09:24.770 |
|
effectively saying that the similarity |
|
|
|
00:09:24.770 --> 00:09:26.510 |
|
of two points is based on their dot |
|
|
|
00:09:26.510 --> 00:09:27.759 |
|
product, like the product of |
|
|
|
00:09:27.760 --> 00:09:29.260 |
|
corresponding elements summed together. |
|
|
|
00:09:30.110 --> 00:09:32.030 |
|
And if you take the dot product of two |
|
|
|
00:09:32.030 --> 00:09:33.600 |
|
pixel positions, it doesn't reflect |
|
|
|
00:09:33.600 --> 00:09:35.410 |
|
their similarity at all really. |
|
|
|
00:09:35.410 --> 00:09:36.570 |
|
So like if you get. |
|
|
|
00:09:37.640 --> 00:09:40.361 |
|
Two pixel positions that are
|
|
|
00:09:40.361 --> 00:09:42.240 |
|
high, that are next to each other.
|
|
|
00:09:42.240 --> 00:09:43.500 |
|
When you take the dot product, it's |
|
|
|
00:09:43.500 --> 00:09:44.490 |
|
still a very high value. |
|
|
|
00:09:47.010 --> 00:09:50.730 |
|
If you transform those features using |
|
|
|
00:09:50.730 --> 00:09:53.840 |
|
sinusoidal encoding, so you just |
|
|
|
00:09:53.840 --> 00:09:55.630 |
|
compute sines and cosines of the |
|
|
|
00:09:55.630 --> 00:09:58.830 |
|
original positions, then it makes it so |
|
|
|
00:09:58.830 --> 00:10:00.366 |
|
that if you take the dot product of |
|
|
|
00:10:00.366 --> 00:10:01.340 |
|
those encoded. |
|
|
|
00:10:02.330 --> 00:10:03.610 |
|
Positions. |
|
|
|
00:10:03.610 --> 00:10:06.280 |
|
Then positions that are very close |
|
|
|
00:10:06.280 --> 00:10:07.870 |
|
together will have high similarity. |
|
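NOTE
A sketch of the sinusoidal encoding being described, assuming a simple power-of-two frequency schedule; the number of frequencies is an arbitrary choice here, not the paper's exact setting.
import torch
def positional_encoding(xy, num_freqs=6):
    # xy: (N, 2) positions; returns (N, 4 * num_freqs) sines and cosines
    feats = []
    for k in range(num_freqs):
        freq = (2.0 ** k) * torch.pi
        feats.append(torch.sin(freq * xy))
        feats.append(torch.cos(freq * xy))
    return torch.cat(feats, dim=-1)
# nearby positions now get encodings with a high dot product; far-apart ones do not
enc = positional_encoding(torch.rand(8, 2))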
|
|
00:10:10.000 --> 00:10:11.830 |
|
So that's, in a nutshell,
|
|
|
00:10:11.900 --> 00:10:15.490 |
|
the idea. I mean there's like a
|
|
|
00:10:15.490 --> 00:10:15.990 |
|
whole
|
|
|
00:10:17.680 --> 00:10:20.410 |
|
theory and stuff behind it, but that's
|
|
|
00:10:20.410 --> 00:10:21.650 |
|
the basic idea, is that they have a |
|
|
|
00:10:21.650 --> 00:10:23.760 |
|
simple transformation that makes this |
|
|
|
00:10:23.760 --> 00:10:25.920 |
|
mapping more, that makes this |
|
|
|
00:10:25.920 --> 00:10:28.170 |
|
similarity more linear, and that |
|
|
|
00:10:28.170 --> 00:10:30.320 |
|
enables you to get high frequency |
|
|
|
00:10:30.320 --> 00:10:31.850 |
|
images and stuff. |
|
|
|
00:10:31.850 --> 00:10:33.660 |
|
You can encode high frequency images
|
|
|
00:10:33.660 --> 00:10:33.920 |
|
better. |
|
|
|
00:10:37.910 --> 00:10:39.990 |
|
Right, so I want to spend a little time |
|
|
|
00:10:39.990 --> 00:10:42.700 |
|
talking about homework two and. |
|
|
|
00:10:43.580 --> 00:10:44.350 |
|
I'm also going. |
|
|
|
00:10:44.350 --> 00:10:45.860 |
|
I can also take questions. |
|
|
|
00:10:45.860 --> 00:10:49.180 |
|
This is due in about 12 days or so. |
|
|
|
00:10:50.520 --> 00:10:51.270 |
|
11 days. |
|
|
|
00:10:52.260 --> 00:10:52.890 |
|
Yeah, mine. |
|
|
|
00:10:53.690 --> 00:10:56.280 |
|
I'm on VGA, unfortunately, so.
|
|
|
00:10:57.460 --> 00:11:02.560 |
|
My Size of things is annoyingly small |
|
|
|
00:11:02.560 --> 00:11:03.300 |
|
and stretched. |
|
|
|
00:11:09.940 --> 00:11:13.630 |
|
Take things down from like 4K to 480. |
|
|
|
00:11:18.060 --> 00:11:18.420 |
|
All right. |
|
|
|
00:11:20.890 --> 00:11:23.130 |
|
So for homework two first overview, |
|
|
|
00:11:23.130 --> 00:11:23.940 |
|
there's three parts. |
|
|
|
00:11:25.270 --> 00:11:26.430 |
|
Alright, I guess I won't overview. |
|
|
|
00:11:26.430 --> 00:11:27.180 |
|
I'll go into each part. |
|
|
|
00:11:27.850 --> 00:11:30.260 |
|
So the first part is and I'll take |
|
|
|
00:11:30.260 --> 00:11:30.695 |
|
questions. |
|
|
|
00:11:30.695 --> 00:11:32.520 |
|
I'll just describe it briefly and then |
|
|
|
00:11:32.520 --> 00:11:34.000 |
|
see if anybody has any clarifying |
|
|
|
00:11:34.000 --> 00:11:34.542 |
|
questions. |
|
|
|
00:11:34.542 --> 00:11:38.160 |
|
The first part is to look at like bias |
|
|
|
00:11:38.160 --> 00:11:41.130 |
|
variance and tree models.
|
|
|
00:11:42.470 --> 00:11:44.620 |
|
So we're doing the same temperature |
|
|
|
00:11:44.620 --> 00:11:46.340 |
|
problem that we saw in homework one. |
|
|
|
00:11:47.260 --> 00:11:48.990 |
|
Same exact features and labels. |
|
|
|
00:11:49.920 --> 00:11:52.200 |
|
And we are going to look at three |
|
|
|
00:11:52.200 --> 00:11:54.870 |
|
different kinds of models, regression |
|
|
|
00:11:54.870 --> 00:11:55.410 |
|
trees. |
|
|
|
00:11:56.850 --> 00:11:59.590 |
|
Random forests and boosted regression |
|
|
|
00:11:59.590 --> 00:12:02.020 |
|
trees, and in particular we're using |
|
|
|
00:12:02.020 --> 00:12:04.510 |
|
like this Gradient boost method, but |
|
|
|
00:12:04.510 --> 00:12:06.400 |
|
the type of boosting is not really |
|
|
|
00:12:06.400 --> 00:12:07.459 |
|
important and we're not going to |
|
|
|
00:12:07.460 --> 00:12:08.500 |
|
implement it, we're just going to use |
|
|
|
00:12:08.500 --> 00:12:08.910 |
|
the library. |
|
|
|
00:12:09.670 --> 00:12:11.055 |
|
So what we're going to do is we're |
|
|
|
00:12:11.055 --> 00:12:13.250 |
|
going to test what is the Training |
|
|
|
00:12:13.250 --> 00:12:15.350 |
|
error and the validation error. |
|
|
|
00:12:15.960 --> 00:12:18.000 |
|
For five different depths. |
|
|
|
00:12:19.450 --> 00:12:22.170 |
|
And these five depths meaning how deep |
|
|
|
00:12:22.170 --> 00:12:22.910 |
|
do we grow the tree? |
|
|
|
00:12:24.220 --> 00:12:27.192 |
|
And then we're going to plot it and |
|
|
|
00:12:27.192 --> 00:12:28.810 |
|
then answer some questions about it. |
|
|
|
00:12:30.180 --> 00:12:32.410 |
|
So looking at this Starter code. |
|
|
|
00:12:38.400 --> 00:12:39.980 |
|
So this is just loading the temperature |
|
|
|
00:12:39.980 --> 00:12:40.260 |
|
data. |
|
|
|
00:12:40.260 --> 00:12:42.523 |
|
It's the same as before plotting it, |
|
|
|
00:12:42.523 --> 00:12:44.340 |
|
just to give a sense of what it means. |
|
|
|
00:12:46.640 --> 00:12:47.470 |
|
And then I've got. |
|
|
|
00:12:48.320 --> 00:12:49.460 |
|
This error. |
|
|
|
00:12:51.440 --> 00:12:53.580 |
|
This function is included to plot the |
|
|
|
00:12:53.580 --> 00:12:56.570 |
|
errors and it's just taking as input |
|
|
|
00:12:56.570 --> 00:12:58.560 |
|
that Depth array.
|
|
|
00:12:59.320 --> 00:13:02.280 |
|
And corresponding lists or arrays that
|
|
|
00:13:02.280 --> 00:13:05.670 |
|
store the Training error and validation |
|
|
|
00:13:05.670 --> 00:13:08.240 |
|
error for each Model. |
|
|
|
00:13:09.110 --> 00:13:12.756 |
|
Training error means the RMSE error on |
|
|
|
00:13:12.756 --> 00:13:14.982 |
|
the training set and validation means |
|
|
|
00:13:14.982 --> 00:13:17.360 |
|
the validation error on the validation |
|
|
|
00:13:17.360 --> 00:13:18.819 |
|
I mean the error on the validation set. |
|
|
|
00:13:21.850 --> 00:13:22.420 |
|
These are. |
|
|
|
00:13:22.420 --> 00:13:27.230 |
|
I provide the code to compute a given |
|
|
|
00:13:27.230 --> 00:13:29.070 |
|
or to initialize a given model. |
|
|
|
00:13:29.070 --> 00:13:31.950 |
|
So you can create this model, you
|
|
|
00:13:31.950 --> 00:13:33.700 |
|
can do Model dot fit with the training |
|
|
|
00:13:33.700 --> 00:13:36.220 |
|
data and Model dot predict.
|
|
|
00:13:37.270 --> 00:13:40.730 |
|
And then you can like compute the RMSE, |
|
|
|
00:13:40.730 --> 00:13:42.555 |
|
evaluate the validation data and |
|
|
|
00:13:42.555 --> 00:13:43.310 |
|
compute RMSE. |
|
|
|
00:13:43.310 --> 00:13:44.960 |
|
So it's like it's not meant to be. |
|
|
|
00:13:44.960 --> 00:13:46.430 |
|
It's not like an
|
|
|
00:13:46.430 --> 00:13:48.135 |
|
algorithm coding problem, it's more of |
|
|
|
00:13:48.135 --> 00:13:49.990 |
|
an evaluation and analysis problem. |
|
|
|
00:13:52.180 --> 00:13:53.330 |
|
No, you don't need to code these |
|
|
|
00:13:53.330 --> 00:13:53.725 |
|
functions. |
|
|
|
00:13:53.725 --> 00:13:54.600 |
|
You just call this. |
|
|
|
00:13:56.450 --> 00:13:58.200 |
|
So you would for example call the |
|
|
|
00:13:58.200 --> 00:13:58.960 |
|
decision tree. |
|
|
|
00:13:58.960 --> 00:14:00.990 |
|
You'd do a loop through the max depths.
|
|
|
00:14:01.920 --> 00:14:04.440 |
|
For each of these you, like,
|
|
|
00:14:04.440 --> 00:14:07.159 |
|
instantiate the Model, fit, predict on
|
|
|
00:14:07.160 --> 00:14:08.510 |
|
train, predict on test. |
|
|
|
00:14:09.570 --> 00:14:12.080 |
|
Compute the RMSE error. |
|
|
|
00:14:13.080 --> 00:14:15.250 |
|
If you want to use built-in scoring |
|
|
|
00:14:15.250 --> 00:14:17.450 |
|
functions to compute RMSE, it's fine |
|
|
|
00:14:17.450 --> 00:14:18.860 |
|
with me as long as it's accurate. |
|
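NOTE
A rough sketch of the loop just described, assuming sklearn and NumPy; X_train, y_train, X_val, y_val are placeholder arrays standing in for the temperature data the starter code loads, and the depth list is arbitrary.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error
X_train, y_train = np.random.rand(200, 3), np.random.rand(200)   # placeholder data so the sketch runs
X_val, y_val = np.random.rand(50, 3), np.random.rand(50)
max_depths = [1, 3, 5, 10, 20]
train_err, val_err = [], []
for d in max_depths:
    model = DecisionTreeRegressor(max_depth=d)
    model.fit(X_train, y_train)
    # RMSE = square root of the mean squared error
    train_err.append(np.sqrt(mean_squared_error(y_train, model.predict(X_train))))
    val_err.append(np.sqrt(mean_squared_error(y_val, model.predict(X_val))))
# then pass max_depths, train_err, val_err to the provided plotting function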
|
|
00:14:20.850 --> 00:14:22.910 |
|
And then you and then you record them |
|
|
|
00:14:22.910 --> 00:14:24.650 |
|
and then you plot it with this |
|
|
|
00:14:24.650 --> 00:14:25.170 |
|
Function. |
|
|
|
00:14:28.350 --> 00:14:30.280 |
|
And. |
|
|
|
00:14:30.710 --> 00:14:33.190 |
|
So let's look at the report template a |
|
|
|
00:14:33.190 --> 00:14:33.610 |
|
little bit. |
|
|
|
00:14:34.300 --> 00:14:36.830 |
|
Right, so just generating that plot is |
|
|
|
00:14:36.830 --> 00:14:37.890 |
|
worth 10 points. |
|
|
|
00:14:38.540 --> 00:14:42.580 |
|
And analyzing the result is worth 20 |
|
|
|
00:14:42.580 --> 00:14:43.070 |
|
points. |
|
|
|
00:14:43.070 --> 00:14:44.900 |
|
So there's more points for answering |
|
|
|
00:14:44.900 --> 00:14:45.780 |
|
questions about it, yeah. |
|
|
|
00:15:01.480 --> 00:15:04.110 |
|
So in some cases it's pretty
|
|
|
00:15:04.110 --> 00:15:05.490 |
|
literally from the plot. |
|
|
|
00:15:05.490 --> 00:15:06.980 |
|
For example, for regression trees, |
|
|
|
00:15:06.980 --> 00:15:08.610 |
|
which tree Depth achieves minimum |
|
|
|
00:15:08.610 --> 00:15:09.730 |
|
validation error? |
|
|
|
00:15:09.730 --> 00:15:11.100 |
|
That's something that you should be |
|
|
|
00:15:11.100 --> 00:15:11.600 |
|
able to. |
|
|
|
00:15:12.400 --> 00:15:14.430 |
|
Basically, read directly from the plot. |
|
|
|
00:15:14.430 --> 00:15:18.200 |
|
In other cases it requires some other |
|
|
|
00:15:18.200 --> 00:15:20.170 |
|
knowledge and interpretation, so for |
|
|
|
00:15:20.170 --> 00:15:20.820 |
|
example. |
|
|
|
00:15:22.310 --> 00:15:24.955 |
|
Do these trees seem to perform better
|
|
|
00:15:24.955 --> 00:15:26.580 |
|
with smaller or larger trees?
|
|
|
00:15:26.580 --> 00:15:27.040 |
|
Why? |
|
|
|
00:15:27.040 --> 00:15:28.474 |
|
So whether they perform better with |
|
|
|
00:15:28.474 --> 00:15:29.960 |
|
smaller or larger trees is something |
|
|
|
00:15:29.960 --> 00:15:31.760 |
|
you can observe directly from the plot, |
|
|
|
00:15:31.760 --> 00:15:33.900 |
|
but the why is like applying your
|
|
|
00:15:33.900 --> 00:15:34.840 |
|
understanding of. |
|
|
|
00:15:35.880 --> 00:15:38.120 |
|
Bias variance in the tree algorithm to |
|
|
|
00:15:38.120 --> 00:15:40.555 |
|
be able to say why what you observe is |
|
|
|
00:15:40.555 --> 00:15:40.940 |
|
the case. |
|
|
|
00:15:43.360 --> 00:15:45.500 |
|
Likewise, which model is least prone to
|
|
|
00:15:45.500 --> 00:15:45.870 |
|
overfitting. |
|
|
|
00:15:45.870 --> 00:15:48.170 |
|
You can observe that if you understand |
|
|
|
00:15:48.170 --> 00:15:49.660 |
|
what overfitting means directly in the |
|
|
|
00:15:49.660 --> 00:15:52.150 |
|
plot, but again like the why requires
|
|
|
00:15:52.150 --> 00:15:52.990 |
|
some understanding. |
|
|
|
00:15:53.750 --> 00:15:57.850 |
|
And which model has the lowest bias |
|
|
|
00:15:57.850 --> 00:15:59.470 |
|
requires that you understand what bias |
|
|
|
00:15:59.470 --> 00:16:01.230 |
|
means, but if you do, then you can read |
|
|
|
00:16:01.230 --> 00:16:03.380 |
|
it directly from the plot as well. |
|
|
|
00:16:05.360 --> 00:16:05.630 |
|
Yeah. |
|
|
|
00:16:10.460 --> 00:16:10.790 |
|
OK. |
|
|
|
00:16:10.790 --> 00:16:12.770 |
|
Any other questions about part one? |
|
|
|
00:16:15.580 --> 00:16:18.060 |
|
OK, so Part 2. |
|
|
|
00:16:18.740 --> 00:16:22.110 |
|
Is going back to MNIST again and we |
|
|
|
00:16:22.110 --> 00:16:23.740 |
|
will move beyond these data sets for |
|
|
|
00:16:23.740 --> 00:16:24.040 |
|
homework
|
|
|
00:16:24.040 --> 00:16:25.490 |
|
three, but.
|
|
|
00:16:27.350 --> 00:16:30.230 |
|
But going back to MNIST and now and now |
|
|
|
00:16:30.230 --> 00:16:30.820 |
|
like. |
|
|
|
00:16:32.200 --> 00:16:34.570 |
|
Applying MLPs to MNIST.
|
|
|
00:16:36.910 --> 00:16:39.470 |
|
So let's go to the Starter code again. |
|
|
|
00:16:43.390 --> 00:16:45.160 |
|
Right, so this is the same code as |
|
|
|
00:16:45.160 --> 00:16:47.210 |
|
before, just to load the MNIST data. |
|
|
|
00:16:47.210 --> 00:16:48.800 |
|
We're not going to actually use like |
|
|
|
00:16:48.800 --> 00:16:51.052 |
|
the different sub splits, we're just
|
|
|
00:16:51.052 --> 00:16:52.730 |
|
going to use the full training set. |
|
|
|
00:16:53.430 --> 00:16:54.390 |
|
And validation set. |
|
|
|
00:16:56.230 --> 00:16:58.690 |
|
There's some code here to OK, so let me |
|
|
|
00:16:58.690 --> 00:17:01.090 |
|
first talk about what the problem is. |
|
|
|
00:17:02.100 --> 00:17:03.770 |
|
So you're going to train a network. |
|
|
|
00:17:03.770 --> 00:17:05.570 |
|
We give you a starting like learning |
|
|
|
00:17:05.570 --> 00:17:08.290 |
|
rate and optimizer to use and Batch |
|
|
|
00:17:08.290 --> 00:17:08.780 |
|
Size. |
|
|
|
00:17:09.520 --> 00:17:11.870 |
|
And you record the training and the |
|
|
|
00:17:11.870 --> 00:17:14.690 |
|
validation loss after each epoch. |
|
|
|
00:17:15.680 --> 00:17:16.890 |
|
That's the cycle through the training |
|
|
|
00:17:16.890 --> 00:17:17.090 |
|
data. |
|
|
|
00:17:17.800 --> 00:17:19.505 |
|
And then you compute the validation of |
|
|
|
00:17:19.505 --> 00:17:21.590 |
|
the final model, and then you report |
|
|
|
00:17:21.590 --> 00:17:24.010 |
|
some of these errors and losses in the |
|
|
|
00:17:24.010 --> 00:17:24.350 |
|
report. |
|
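NOTE
A skeleton of the training loop being described, assuming PyTorch; the model shape, learning rate, epoch count, and the train_loader/val_loader DataLoaders are placeholder assumptions, not the homework's given settings.
import torch
import torch.nn as nn
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.01)
train_losses, val_losses = [], []
for epoch in range(10):                  # one epoch = one full cycle through the training data
    model.train()
    total, n = 0.0, 0
    for xb, yb in train_loader:          # train_loader: an assumed DataLoader of (image, label) batches
        opt.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        opt.step()
        total += loss.item() * len(xb)
        n += len(xb)
    train_losses.append(total / n)
    model.eval()
    with torch.no_grad():                # validation loss after this epoch, no gradient updates
        val_total = sum(loss_fn(model(xb), yb).item() * len(xb) for xb, yb in val_loader)
        val_losses.append(val_total / len(val_loader.dataset))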
|
|
00:17:25.030 --> 00:17:27.375 |
|
And then we say try some different |
|
|
|
00:17:27.375 --> 00:17:28.600 |
|
learning rates. |
|
|
|
00:17:28.600 --> 00:17:31.340 |
|
So vary that ETA the learning rate of |
|
|
|
00:17:31.340 --> 00:17:32.410 |
|
your optimizer. |
|
|
|
00:17:33.090 --> 00:17:35.750 |
|
And again compare. |
|
|
|
00:17:35.750 --> 00:17:38.050 |
|
Create these plots of the Training |
|
|
|
00:17:38.050 --> 00:17:39.630 |
|
validation loss and compare them for |
|
|
|
00:17:39.630 --> 00:17:40.400 |
|
different learning rate. |
|
|
|
00:17:41.640 --> 00:17:42.070 |
|
Question. |
|
|
|
00:17:47.510 --> 00:17:50.610 |
|
It's in some ways it's an arbitrary |
|
|
|
00:17:50.610 --> 00:17:52.520 |
|
choice, but PyTorch is a really
|
|
|
00:17:52.520 --> 00:17:54.310 |
|
popular package for Deep Learning. |
|
|
|
00:17:54.310 --> 00:17:55.730 |
|
So like there are others but. |
|
|
|
00:17:56.340 --> 00:17:59.133 |
|
Since we're using Python, I
|
|
|
00:17:59.133 --> 00:18:01.110 |
|
would use a Python package and it's |
|
|
|
00:18:01.110 --> 00:18:01.515 |
|
just like. |
|
|
|
00:18:01.515 --> 00:18:03.260 |
|
I would say that probably like the most |
|
|
|
00:18:03.260 --> 00:18:04.710 |
|
popular framework right now. |
|
|
|
00:18:08.830 --> 00:18:11.490 |
|
Yeah, TensorFlow is also another,
|
|
|
00:18:11.490 --> 00:18:12.830 |
|
would be another good candidate. |
|
|
|
00:18:12.830 --> 00:18:15.120 |
|
Or Keras I guess, which is I think |
|
|
|
00:18:15.120 --> 00:18:16.220 |
|
based on TensorFlow maybe.
|
|
|
00:18:17.350 --> 00:18:19.596 |
|
But yeah, we're using torch. |
|
|
|
00:18:19.596 --> 00:18:20.920 |
|
Yeah, there's no like. |
|
|
|
00:18:20.920 --> 00:18:22.670 |
|
I don't have anything against the other |
|
|
|
00:18:22.670 --> 00:18:25.600 |
|
packages, but I think PyTorch is
|
|
|
00:18:26.740 --> 00:18:29.760 |
|
probably one of the more popular; it still probably
|
|
|
00:18:29.760 --> 00:18:30.840 |
|
edges out TensorFlow
|
|
|
00:18:30.840 --> 00:18:32.460 |
|
right now as the most popular, I would
|
|
|
00:18:32.460 --> 00:18:32.590 |
|
say. |
|
|
|
00:18:34.580 --> 00:18:35.170 |
|
|
|
|
|
00:18:37.650 --> 00:18:41.452 |
|
Then finally you try to like. |
|
|
|
00:18:41.452 --> 00:18:42.840 |
|
You can adjust the learning rate and |
|
|
|
00:18:42.840 --> 00:18:44.305 |
|
the hidden layer size and other things |
|
|
|
00:18:44.305 --> 00:18:45.890 |
|
to try to improve the network and you |
|
|
|
00:18:45.890 --> 00:18:48.460 |
|
should be able to get validation error |
|
|
|
00:18:48.460 --> 00:18:49.292 |
|
less than 25. |
|
|
|
00:18:49.292 --> 00:18:50.800 |
|
So this is basically. |
|
|
|
00:18:50.800 --> 00:18:53.180 |
|
I just chose this because like in a few |
|
|
|
00:18:53.180 --> 00:18:55.200 |
|
minutes or down now 15 minutes of |
|
|
|
00:18:55.200 --> 00:18:55.522 |
|
experimentation. |
|
|
|
00:18:55.522 --> 00:18:57.376 |
|
This is like roughly what I was able to |
|
|
|
00:18:57.376 --> 00:18:57.509 |
|
get. |
|
|
|
00:18:58.730 --> 00:18:59.280 |
|
|
|
|
|
00:19:00.200 --> 00:19:00.940 |
|
So. |
|
|
|
00:19:01.790 --> 00:19:02.940 |
|
If we look at the. |
|
|
|
00:19:06.020 --> 00:19:07.730 |
|
So then we have like. |
|
|
|
00:19:07.730 --> 00:19:09.610 |
|
So basically the main part of the code |
|
|
|
00:19:09.610 --> 00:19:11.070 |
|
that you need to write is in here. |
|
|
|
00:19:11.070 --> 00:19:14.580 |
|
So where you have the training and it's |
|
|
|
00:19:14.580 --> 00:19:16.999 |
|
pretty similar to the example that I |
|
|
|
00:19:17.000 --> 00:19:18.220 |
|
gave in class. |
|
|
|
00:19:18.220 --> 00:19:20.244 |
|
But the biggest difference is that in |
|
|
|
00:19:20.244 --> 00:19:22.040 |
|
the example I did in class. |
|
|
|
00:19:22.800 --> 00:19:25.380 |
|
It's a binary problem and so you |
|
|
|
00:19:25.380 --> 00:19:27.254 |
|
represent you have only one output, and |
|
|
|
00:19:27.254 --> 00:19:29.034 |
|
if that Output is negative then it |
|
|
|
00:19:29.034 --> 00:19:30.259 |
|
indicates one class and if it's |
|
|
|
00:19:30.260 --> 00:19:31.870 |
|
positive it indicates another class. |
|
|
|
00:19:32.820 --> 00:19:33.380 |
|
If you have. |
|
|
|
00:19:34.120 --> 00:19:35.810 |
|
Multiple classes. |
|
|
|
00:19:36.170 --> 00:19:36.730 |
|
|
|
|
|
00:19:37.510 --> 00:19:38.920 |
|
That obviously doesn't work. |
|
|
|
00:19:38.920 --> 00:19:40.825 |
|
You can't represent it with one Output. |
|
|
|
00:19:40.825 --> 00:19:43.980 |
|
You instead need to Output one value |
|
|
|
00:19:43.980 --> 00:19:45.200 |
|
for each of your classes. |
|
|
|
00:19:45.200 --> 00:19:46.645 |
|
So if you have three classes, if you |
|
|
|
00:19:46.645 --> 00:19:48.009 |
|
have two classes, you can have one |
|
|
|
00:19:48.009 --> 00:19:48.253 |
|
Output. |
|
|
|
00:19:48.253 --> 00:19:50.209 |
|
If you have three classes, you need 3 |
|
|
|
00:19:50.210 --> 00:19:50.540 |
|
outputs. |
|
|
|
00:19:51.280 --> 00:19:54.060 |
|
You have one output for each class and |
|
|
|
00:19:54.060 --> 00:19:57.020 |
|
that output,
|
|
|
00:19:57.020 --> 00:19:58.780 |
|
depending on how you set up your loss,
|
|
|
00:19:58.780 --> 00:20:02.450 |
|
it can either be a probability, so zero |
|
|
|
00:20:02.450 --> 00:20:04.450 |
|
to one, or it can be a logit:
|
|
|
00:20:05.530 --> 00:20:08.690 |
|
negative infinity to infinity, the log
|
|
|
00:20:08.690 --> 00:20:09.430 |
|
class ratio. |
|
|
|
00:20:13.080 --> 00:20:17.090 |
|
And then you need to like reformat |
|
|
|
00:20:17.090 --> 00:20:20.043 |
|
instead of representing the label as |
|
|
|
00:20:20.043 --> 00:20:22.069 |
|
like 0, 1, 2, 3, 4, 5, 6, 7, 8, 9.
|
|
|
00:20:22.680 --> 00:20:24.390 |
|
You represent it with what's called a
|
|
|
00:20:24.390 --> 00:20:26.640 |
|
one-hot vector and it's explained what that
|
|
|
00:20:26.640 --> 00:20:27.250 |
|
is in the Tips. |
|
|
|
00:20:27.250 --> 00:20:30.370 |
|
But basically a 3 is represented as like
|
|
|
00:20:30.370 --> 00:20:33.479 |
|
you have a ten element vector and the |
|
|
|
00:20:33.480 --> 00:20:35.830 |
|
third value of that vector is 1 and all |
|
|
|
00:20:35.830 --> 00:20:37.480 |
|
the other values are zero. |
|
|
|
00:20:37.480 --> 00:20:39.370 |
|
So it's like you just represent which |
|
|
|
00:20:39.370 --> 00:20:42.210 |
|
of these ten labels is on for this |
|
|
|
00:20:42.210 --> 00:20:42.760 |
|
example. |
|
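NOTE
A small sketch of the one-hot representation described here, assuming PyTorch; the example labels are made up.
import torch
import torch.nn.functional as F
labels = torch.tensor([3, 0, 9])                       # labels stored as 0-9 integers
one_hot = F.one_hot(labels, num_classes=10).float()
# one_hot[0] is [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]: only the entry for class 3 is on
print(one_hot)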
|
|
00:20:45.180 --> 00:20:46.070 |
|
Otherwise. |
|
|
|
00:20:47.420 --> 00:20:49.010 |
|
That makes some small differences and |
|
|
|
00:20:49.010 --> 00:20:52.170 |
|
how you compute loss just like code |
|
|
|
00:20:52.170 --> 00:20:54.500 |
|
wise, but otherwise it's essentially |
|
|
|
00:20:54.500 --> 00:20:54.890 |
|
the same. |
|
|
|
00:20:55.860 --> 00:20:57.090 |
|
I also have. |
|
|
|
00:21:00.540 --> 00:21:02.420 |
|
And one more. |
|
|
|
00:21:02.420 --> 00:21:02.730 |
|
OK. |
|
|
|
00:21:02.730 --> 00:21:04.640 |
|
So first let me go to the report for |
|
|
|
00:21:04.640 --> 00:21:04.750 |
|
that. |
|
|
|
00:21:05.500 --> 00:21:06.850 |
|
So you report your training and your
|
|
|
00:21:06.850 --> 00:21:09.600 |
|
validation loss and your curves, your |
|
|
|
00:21:09.600 --> 00:21:09.930 |
|
plots. |
|
|
|
00:21:11.230 --> 00:21:12.240 |
|
And your final losses? |
|
|
|
00:21:13.520 --> 00:21:15.630 |
|
I mean your final errors.
|
|
|
00:21:18.240 --> 00:21:18.920 |
|
|
|
|
|
00:21:21.010 --> 00:21:23.600 |
|
So what was I going to say? |
|
|
|
00:21:23.600 --> 00:21:24.040 |
|
Yes. |
|
|
|
00:21:24.040 --> 00:21:26.900 |
|
So the tips and tricks
|
|
|
00:21:30.700 --> 00:21:33.600 |
|
are focused on Part 2 because I
|
|
|
00:21:33.600 --> 00:21:36.670 |
|
think part one is a little bit. |
|
|
|
00:21:36.670 --> 00:21:38.850 |
|
There's not that much to it really code |
|
|
|
00:21:38.850 --> 00:21:39.140 |
|
wise. |
|
|
|
00:21:41.720 --> 00:21:44.300 |
|
So probably most of
|
|
|
00:21:44.300 --> 00:21:46.933 |
|
you are new to PyTorch or Deep Learning
|
|
|
00:21:46.933 --> 00:21:47.779 |
|
or MLPs.
|
|
|
00:21:49.400 --> 00:21:51.520 |
|
So I would recommend looking at this |
|
|
|
00:21:51.520 --> 00:21:52.460 |
|
tutorial first. |
|
|
|
00:21:53.130 --> 00:21:56.170 |
|
And it explains it like pretty clearly |
|
|
|
00:21:56.170 --> 00:21:57.780 |
|
how to do things. |
|
|
|
00:21:57.780 --> 00:22:00.060 |
|
You can also like the code that I wrote |
|
|
|
00:22:00.060 --> 00:22:03.470 |
|
before is like mostly a lot of it can |
|
|
|
00:22:03.470 --> 00:22:04.390 |
|
be applied directly. |
|
|
|
00:22:05.180 --> 00:22:05.560 |
|
And it's. |
|
|
|
00:22:05.560 --> 00:22:09.250 |
|
Also the basic loop is down here so. |
|
|
|
00:22:10.470 --> 00:22:13.805 |
|
You shouldn't like abstractly it's not. |
|
|
|
00:22:13.805 --> 00:22:15.965 |
|
It's not necessarily that you can see |
|
|
|
00:22:15.965 --> 00:22:18.490 |
|
the slides and understand MLPs and know
|
|
|
00:22:18.490 --> 00:22:19.690 |
|
exactly how you should code it. |
|
|
|
00:22:19.690 --> 00:22:21.490 |
|
You need you will need to look at the |
|
|
|
00:22:21.490 --> 00:22:23.830 |
|
tutorial or in like this code |
|
|
|
00:22:23.830 --> 00:22:24.210 |
|
structure. |
|
|
|
00:22:26.280 --> 00:22:28.840 |
|
Because it's using libraries: torch
|
|
|
00:22:28.840 --> 00:22:31.180 |
|
handles for us all the
|
|
|
00:22:31.180 --> 00:22:33.230 |
|
optimization. You just specify a
|
|
|
00:22:33.230 --> 00:22:35.829 |
|
loss, you specify your structure of the |
|
|
|
00:22:35.830 --> 00:22:37.130 |
|
network and then it kind of does |
|
|
|
00:22:37.130 --> 00:22:38.020 |
|
everything else for you. |
|
|
|
00:22:40.840 --> 00:22:43.355 |
|
OK, so the Tips also say how you set up |
|
|
|
00:22:43.355 --> 00:22:47.046 |
|
a data loader and the basic procedure, |
|
|
|
00:22:47.046 --> 00:22:50.585 |
|
how you get the GPU to work on Colab and
|
|
|
00:22:50.585 --> 00:22:53.988 |
|
how you can compute the softmax which |
|
|
|
00:22:53.988 --> 00:22:55.970 |
|
is the probability of a particular |
|
|
|
00:22:55.970 --> 00:22:56.300 |
|
label. |
|
|
|
00:22:56.300 --> 00:22:58.940 |
|
So this is like the probability of this |
|
|
|
00:22:58.940 --> 00:23:00.540 |
|
ground truth label Val I. |
|
|
|
00:23:01.730 --> 00:23:04.190 |
|
Given the data, if this is stored as |
|
|
|
00:23:04.190 --> 00:23:05.260 |
|
like a zero to 9 value. |
|
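NOTE
A sketch of the softmax step described in the tips, assuming PyTorch; the logits and label below are made-up numbers, not homework values.
import torch
import torch.nn.functional as F
logits = torch.tensor([[0.2, 1.5, -0.3, 2.1, 0.0, 0.4, -1.2, 0.9, 0.1, 0.3]])  # raw network outputs for one example
probs = F.softmax(logits, dim=1)        # probabilities over the 10 digit classes
label = torch.tensor([3])               # ground-truth label stored as a 0-9 value
p_true = probs[0, label.item()]         # probability the network assigns to the true label
nll = -torch.log(p_true)                # its negative log probability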
|
|
00:23:10.130 --> 00:23:12.900 |
|
Alright, any questions about two? |
|
|
|
00:23:12.980 --> 00:23:13.150 |
|
Yes. |
|
|
|
00:23:21.340 --> 00:23:25.230 |
|
So if you have multiple classes, that's |
|
|
|
00:23:25.230 --> 00:23:25.870 |
|
not what I want to do. |
|
|
|
00:23:26.770 --> 00:23:29.135 |
|
If you have multiple classes, then you |
|
|
|
00:23:29.135 --> 00:23:29.472 |
|
have. |
|
|
|
00:23:29.472 --> 00:23:31.594 |
|
Then at the Output layer you have |
|
|
|
00:23:31.594 --> 00:23:33.640 |
|
multiple nodes, and each of those nodes |
|
|
|
00:23:33.640 --> 00:23:35.010 |
|
are connected to the previous layer |
|
|
|
00:23:35.010 --> 00:23:36.080 |
|
with their own set of weights. |
|
|
|
00:23:37.600 --> 00:23:39.560 |
|
And so they use like the same |
|
|
|
00:23:39.560 --> 00:23:40.476 |
|
intermediate features. |
|
|
|
00:23:40.476 --> 00:23:42.800 |
|
They use the same representations that |
|
|
|
00:23:42.800 --> 00:23:45.360 |
|
are in the hidden layers or in the |
|
|
|
00:23:45.360 --> 00:23:46.950 |
|
inner layers of the network. |
|
|
|
00:23:46.950 --> 00:23:48.950 |
|
But they each have their own predictor |
|
|
|
00:23:48.950 --> 00:23:51.300 |
|
at the end, and so it actually it |
|
|
|
00:23:51.300 --> 00:23:53.270 |
|
doesn't it instead of producing a |
|
|
|
00:23:53.270 --> 00:23:55.210 |
|
single value, it produces an array of |
|
|
|
00:23:55.210 --> 00:23:55.700 |
|
values. |
|
|
|
00:23:56.460 --> 00:23:59.200 |
|
And that array will typically represent
|
|
|
00:23:59.200 --> 00:24:00.690 |
|
like the probability of each class. |
|
|
|
00:24:04.970 --> 00:24:05.160 |
|
Yeah. |
|
|
|
00:24:10.980 --> 00:24:13.660 |
|
To get the
|
|
|
00:24:13.820 --> 00:24:15.180 |
|
loss for the validation set,
|
|
|
00:24:15.180 --> 00:24:17.070 |
|
you evaluate the validation examples,
|
|
|
00:24:17.070 --> 00:24:20.310 |
|
so call like X Val. |
|
|
|
00:24:21.210 --> 00:24:23.827 |
|
And then you compute the negative log |
|
|
|
00:24:23.827 --> 00:24:26.252 |
|
probability of the true Label given the |
|
|
|
00:24:26.252 --> 00:24:28.660 |
|
given the data, which will be based on |
|
|
|
00:24:28.660 --> 00:24:30.450 |
|
the outputs of your network. |
|
|
|
00:24:30.450 --> 00:24:31.985 |
|
So the network will give you the |
|
|
|
00:24:31.985 --> 00:24:33.130 |
|
probability of each class. |
|
|
|
00:24:33.830 --> 00:24:35.930 |
|
And then you sum the negative log |
|
|
|
00:24:35.930 --> 00:24:37.110 |
|
probability of the true class. |
|
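NOTE
A sketch of that validation-loss computation, assuming PyTorch; model, X_val, and y_val are assumed names for your trained network and validation tensors. Note that CrossEntropyLoss averages, rather than sums, the negative log probability of the true class by default.
import torch
import torch.nn as nn
loss_fn = nn.CrossEntropyLoss()
model.eval()
with torch.no_grad():
    val_logits = model(X_val)              # evaluate the network on the validation examples
    val_loss = loss_fn(val_logits, y_val)  # y_val holds the true labels as 0-9 integers
print(val_loss.item())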
|
|
00:24:47.700 --> 00:24:50.440 |
|
For each example for each class, yeah. |
|
|
|
00:24:53.590 --> 00:24:57.780 |
|
So Part 3 is. |
|
|
|
00:24:58.970 --> 00:25:01.350 |
|
More a data exploration problem in a |
|
|
|
00:25:01.350 --> 00:25:01.540 |
|
way. |
|
|
|
00:25:02.310 --> 00:25:06.190 |
|
So there's this data set, the Palmer |
|
|
|
00:25:06.190 --> 00:25:08.120 |
|
Archipelago Penguin data set. |
|
|
|
00:25:08.750 --> 00:25:10.650 |
|
That where they recorded various |
|
|
|
00:25:10.650 --> 00:25:13.270 |
|
measurements of Penguins and you're |
|
|
|
00:25:13.270 --> 00:25:14.740 |
|
trying to predict a species of the |
|
|
|
00:25:14.740 --> 00:25:15.150 |
|
Penguin. |
|
|
|
00:25:16.360 --> 00:25:18.140 |
|
And the original data had
|
|
|
00:25:18.140 --> 00:25:20.270 |
|
some NaNs and stuff.
|
|
|
00:25:20.270 --> 00:25:21.110 |
|
So we. |
|
|
|
00:25:21.910 --> 00:25:22.860 |
|
We like kind of. |
|
|
|
00:25:22.860 --> 00:25:23.850 |
|
I cleaned it up a bit. |
|
|
|
00:25:24.460 --> 00:25:25.690 |
|
Where we clean it up a bit. |
|
|
|
00:25:27.870 --> 00:25:31.300 |
|
And then in some of the Starter code we |
|
|
|
00:25:31.300 --> 00:25:34.600 |
|
turned some of the strings into one hot |
|
|
|
00:25:34.600 --> 00:25:37.470 |
|
vectors because Sklearn doesn't deal |
|
|
|
00:25:37.470 --> 00:25:38.120 |
|
with the strings. |
|
|
|
00:25:40.450 --> 00:25:43.680 |
|
So the first part is to like look at |
|
|
|
00:25:43.680 --> 00:25:44.560 |
|
some of the. |
|
|
|
00:25:45.730 --> 00:25:47.600 |
|
To just like do scatter plots of some
|
|
|
00:25:47.600 --> 00:25:48.230 |
|
of the features. |
|
|
|
00:25:50.150 --> 00:25:52.060 |
|
And then in the report. |
|
|
|
00:25:53.820 --> 00:25:54.950 |
|
You just. |
|
|
|
00:25:56.400 --> 00:25:58.050 |
|
You just like share the scatter
|
|
|
00:25:58.050 --> 00:26:00.662 |
|
plots and you say if you had to choose |
|
|
|
00:26:00.662 --> 00:26:02.410 |
|
two features like what 2 features would |
|
|
|
00:26:02.410 --> 00:26:03.800 |
|
you choose based on looking at some of |
|
|
|
00:26:03.800 --> 00:26:04.420 |
|
the scatterplot? |
|
|
|
00:26:05.390 --> 00:26:06.890 |
|
It's not like there's
|
|
|
00:26:06.890 --> 00:26:08.800 |
|
necessarily a single right answer, as long as
|
|
|
00:26:08.800 --> 00:26:09.542 |
|
it makes sense. |
|
|
|
00:26:09.542 --> 00:26:11.490 |
|
If you try
|
|
|
00:26:11.490 --> 00:26:13.404 |
|
out some different combinations and |
|
|
|
00:26:13.404 --> 00:26:14.760 |
|
your answer makes sense given what you |
|
|
|
00:26:14.760 --> 00:26:15.510 |
|
tried, that's fine. |
|
|
|
00:26:15.510 --> 00:26:16.990 |
|
It's not like that you have to find the |
|
|
|
00:26:16.990 --> 00:26:19.080 |
|
very best answer by trying all pairs or |
|
|
|
00:26:19.080 --> 00:26:19.600 |
|
anything like that.
|
|
|
00:26:20.980 --> 00:26:23.840 |
|
So it's more of an exercise than like |
|
|
|
00:26:23.840 --> 00:26:25.310 |
|
right or wrong kind of thing. |
|
|
|
00:26:26.090 --> 00:26:26.610 |
|
|
|
|
|
00:26:27.280 --> 00:26:29.460 |
|
And in this Starter code the. |
|
|
|
00:26:30.240 --> 00:26:30.820 |
|
|
|
|
|
00:26:31.830 --> 00:26:34.460 |
|
We provide an example so you just can |
|
|
|
00:26:34.460 --> 00:26:37.130 |
|
run this scatterplot code with |
|
|
|
00:26:37.130 --> 00:26:39.330 |
|
different combinations of features. |
|
|
|
00:26:43.910 --> 00:26:45.530 |
|
Alright and then. |
|
|
|
00:26:48.400 --> 00:26:50.830 |
|
The second part is to use a decision |
|
|
|
00:26:50.830 --> 00:26:51.140 |
|
tree. |
|
|
|
00:26:51.140 --> 00:26:53.910 |
|
If you train a decision tree and |
|
|
|
00:26:53.910 --> 00:26:57.480 |
|
visualize it on the Features, then |
|
|
|
00:26:57.480 --> 00:27:00.230 |
|
you'll be able to see a tree structure |
|
|
|
00:27:00.230 --> 00:27:00.410 |
|
that. |
|
|
|
00:27:01.260 --> 00:27:02.970 |
|
That kind of shows you like. |
|
|
|
00:27:02.970 --> 00:27:04.580 |
|
You can think of that tree in terms of |
|
|
|
00:27:04.580 --> 00:27:05.280 |
|
different rules. |
|
|
|
00:27:05.280 --> 00:27:07.530 |
|
If you follow the branches down, each |
|
|
|
00:27:07.530 --> 00:27:09.885 |
|
like path through the tree is a set of |
|
|
|
00:27:09.885 --> 00:27:10.140 |
|
rules. |
|
|
|
00:27:10.900 --> 00:27:12.860 |
|
And there are different Rule |
|
|
|
00:27:12.860 --> 00:27:15.230 |
|
combinations that can almost perfectly |
|
|
|
00:27:15.230 --> 00:27:17.373 |
|
distinguish Gentoos from all the other
|
|
|
00:27:17.373 --> 00:27:18.940 |
|
species, from the other two species.
|
|
|
00:27:20.180 --> 00:27:22.830 |
|
So just train the tree and visualize |
|
|
|
00:27:22.830 --> 00:27:23.920 |
|
and as a stretch goal. |
|
|
|
00:27:23.920 --> 00:27:25.560 |
|
You can find a different rule, for |
|
|
|
00:27:25.560 --> 00:27:27.180 |
|
example by eliminating some feature |
|
|
|
00:27:27.180 --> 00:27:29.460 |
|
that was used in the first rule or by |
|
|
|
00:27:29.460 --> 00:27:32.003 |
|
using a different criterion for the |
|
|
|
00:27:32.003 --> 00:27:32.870 |
|
tree Learning. |
|
|
|
00:27:35.620 --> 00:27:37.210 |
|
Then you include the rule in your |
|
|
|
00:27:37.210 --> 00:27:37.610 |
|
report. |
|
|
|
00:27:37.610 --> 00:27:38.780 |
|
So it should be something. |
|
|
|
00:27:38.780 --> 00:27:40.910 |
|
If A is greater than five and B is less |
|
|
|
00:27:40.910 --> 00:27:42.300 |
|
than two, then it's a Gentoo,
|
|
|
00:27:42.300 --> 00:27:43.380 |
|
otherwise it's not.
|
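NOTE
A sketch of training and inspecting a small decision tree as described, assuming sklearn; penguin_features (a DataFrame) and penguin_labels are assumed stand-ins for the prepared starter-code data.
from sklearn.tree import DecisionTreeClassifier, export_text
tree = DecisionTreeClassifier(max_depth=3)
tree.fit(penguin_features, penguin_labels)
# print the tree as nested if/else splits; each root-to-leaf path is one rule
print(export_text(tree, feature_names=list(penguin_features.columns)))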
|
|
00:27:46.700 --> 00:27:47.040 |
|
Name. |
|
|
|
00:27:48.400 --> 00:27:50.370 |
|
And then finally design an MLP model to |
|
|
|
00:27:50.370 --> 00:27:51.560 |
|
maximize your accuracy. |
|
|
|
00:27:52.190 --> 00:27:54.000 |
|
This is not actually. |
|
|
|
00:27:55.080 --> 00:27:56.750 |
|
Again, you don't have to program it, |
|
|
|
00:27:56.750 --> 00:27:57.390 |
|
you just. |
|
|
|
00:27:57.390 --> 00:27:59.340 |
|
This is actually kind of like. |
|
|
|
00:28:01.580 --> 00:28:03.150 |
|
Almost like, ridiculously easy. |
|
|
|
00:28:03.830 --> 00:28:06.020 |
|
You can just call your different. |
|
|
|
00:28:06.020 --> 00:28:08.560 |
|
We've learned a bunch of models, for |
|
|
|
00:28:08.560 --> 00:28:10.840 |
|
example these models up here. |
|
|
|
00:28:11.500 --> 00:28:13.600 |
|
You can try these different models that |
|
|
|
00:28:13.600 --> 00:28:15.820 |
|
we used in this experiment, as well as |
|
|
|
00:28:15.820 --> 00:28:17.600 |
|
any other models that you think might |
|
|
|
00:28:17.600 --> 00:28:20.840 |
|
be applicable.
|
|
|
00:28:20.840 --> 00:28:21.730 |
|
Just make sure you're using |
|
|
|
00:28:21.730 --> 00:28:23.126 |
|
Classification models and not |
|
|
|
00:28:23.126 --> 00:28:23.759 |
|
regression models. |
|
|
|
00:28:23.760 --> 00:28:25.820 |
|
But you can try logistic regression or |
|
|
|
00:28:25.820 --> 00:28:27.480 |
|
random forests or trees. |
|
|
|
00:28:28.550 --> 00:28:31.130 |
|
And when you instantiate the Model, |
|
|
|
00:28:31.130 --> 00:28:32.069 |
|
just define the model. |
|
|
|
00:28:32.070 --> 00:28:34.180 |
|
Here for example, logistic model equals |
|
|
|
00:28:34.180 --> 00:28:37.820 |
|
logistic regression with empty parentheses.
|
|
|
00:28:38.910 --> 00:28:40.365 |
|
And then if you put the Model in here |
|
|
|
00:28:40.365 --> 00:28:42.700 |
|
and your data, this will do the cross |
|
|
|
00:28:42.700 --> 00:28:44.190 |
|
validation for you and compute the |
|
|
|
00:28:44.190 --> 00:28:44.660 |
|
score. |
|
|
|
00:28:44.660 --> 00:28:46.255 |
|
So it really just try different models |
|
|
|
00:28:46.255 --> 00:28:49.540 |
|
and see what works well and I found |
|
|
|
00:28:49.540 --> 00:28:52.830 |
|
pretty quickly a model that was 99.5% |
|
|
|
00:28:52.830 --> 00:28:53.230 |
|
accurate. |
|
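NOTE
A sketch of trying a few classifiers with cross-validation, assuming sklearn; X and y are assumed names for the prepared penguin features and species labels.
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
for model in [LogisticRegression(max_iter=1000), RandomForestClassifier()]:
    scores = cross_val_score(model, X, y, cv=5)   # 5-fold cross-validation accuracy
    print(type(model).__name__, scores.mean())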
|
|
00:28:53.900 --> 00:28:54.120 |
|
So. |
|
|
|
00:28:55.410 --> 00:28:56.690 |
|
So again, it's just like a little bit |
|
|
|
00:28:56.690 --> 00:28:58.600 |
|
of a simple model testing. |
|
|
|
00:28:58.870 --> 00:28:59.050 |
|
OK. |
|
|
|
00:29:00.310 --> 00:29:00.710 |
|
Experiment. |
|
|
|
00:29:01.560 --> 00:29:04.135 |
|
So that's the main part of homework 2. |
|
|
|
00:29:04.135 --> 00:29:06.710 |
|
The stretch goals are to further improve
|
|
|
00:29:06.710 --> 00:29:09.190 |
|
MNIST by improving the design of your |
|
|
|
00:29:09.190 --> 00:29:09.630 |
|
network. |
|
|
|
00:29:11.320 --> 00:29:13.310 |
|
Find a second rule, which I mentioned, and
|
|
|
00:29:13.310 --> 00:29:15.000 |
|
the positional encoding. |
|
|
|
00:29:15.000 --> 00:29:18.660 |
|
So this is the like Multi layer network |
|
|
|
00:29:18.660 --> 00:29:20.390 |
|
for predicting color given position. |
|
|
|
00:29:22.460 --> 00:29:24.560 |
|
And it should be possible to get the |
|
|
|
00:29:24.560 --> 00:29:26.450 |
|
full points using the positional
|
|
|
00:29:26.450 --> 00:29:26.860 |
|
encoding. |
|
|
|
00:29:26.860 --> 00:29:28.210 |
|
You should be able to generate like a |
|
|
|
00:29:28.210 --> 00:29:30.440 |
|
fairly natural looking image.
|
|
|
00:29:30.440 --> 00:29:30.770 |
|
It should look good.
|
|
|
00:29:30.770 --> 00:29:32.250 |
|
It might not be quite as sharp as the |
|
|
|
00:29:32.250 --> 00:29:33.540 |
|
original, but it should be pretty good. |
|
|
|
00:29:37.670 --> 00:29:39.180 |
|
OK, one more question. |
|
|
|
00:29:50.410 --> 00:29:54.500 |
|
Yeah, Naive Bayes and KNN are two
|
|
|
00:29:54.500 --> 00:29:56.040 |
|
examples of Classification algorithms. |
|
|
|
00:29:56.720 --> 00:30:00.500 |
|
And Naive Bayes is not usually the best
|
|
|
00:30:00.500 --> 00:30:00.990 |
|
so. |
|
|
|
00:30:02.960 --> 00:30:04.230 |
|
Not the first thing I would try. |
|
|
|
00:30:06.260 --> 00:30:09.630 |
|
So random forests, decision trees, SVMs,
|
|
|
00:30:09.630 --> 00:30:11.090 |
|
Naive Bayes, logistic regression.
|
|
|
00:30:11.860 --> 00:30:12.870 |
|
All of those can apply. |
|
|
|
00:30:15.190 --> 00:30:18.430 |
|
So that was a little bit, it took some |
|
|
|
00:30:18.430 --> 00:30:19.580 |
|
time, but that's OK. |
|
|
|
00:30:21.230 --> 00:30:22.740 |
|
That was one of the things that a lot |
|
|
|
00:30:22.740 --> 00:30:24.550 |
|
of students wanted was, or at least I |
|
|
|
00:30:24.550 --> 00:30:26.720 |
|
think that they said they wanted, is |
|
|
|
00:30:26.720 --> 00:30:29.090 |
|
like to talk like a little bit more in |
|
|
|
00:30:29.090 --> 00:30:30.360 |
|
depth about the homework and try to |
|
|
|
00:30:30.360 --> 00:30:32.420 |
|
explain like what we're trying to ask |
|
|
|
00:30:32.420 --> 00:30:32.710 |
|
for. |
|
|
|
00:30:32.710 --> 00:30:34.330 |
|
So hopefully that does help a little |
|
|
|
00:30:34.330 --> 00:30:34.480 |
|
bit. |
|
|
|
00:30:36.390 --> 00:30:38.485 |
|
Alright, so now we can move on to Deep |
|
|
|
00:30:38.485 --> 00:30:40.020 |
|
Learning, which is a pretty exciting |
|
|
|
00:30:40.020 --> 00:30:41.180 |
|
topic. |
|
|
|
00:30:41.180 --> 00:30:42.545 |
|
I'm sure everyone's heard of Deep |
|
|
|
00:30:42.545 --> 00:30:42.810 |
|
Learning. |
|
|
|
00:30:43.950 --> 00:30:45.470 |
|
So I'm going to tell the story of how |
|
|
|
00:30:45.470 --> 00:30:47.580 |
|
Deep Learning became so important, and |
|
|
|
00:30:47.580 --> 00:30:48.650 |
|
then I'm going to talk about the |
|
|
|
00:30:48.650 --> 00:30:49.440 |
|
Optimizers. |
|
|
|
00:30:49.440 --> 00:30:51.460 |
|
So going beyond the Vanilla SGD. |
|
|
|
00:30:52.130 --> 00:30:55.940 |
|
And get into Residual Networks, which |
|
|
|
00:30:55.940 --> 00:30:59.210 |
|
is one of the mainstays.
|
|
|
00:31:00.160 --> 00:31:01.730 |
|
I'm kind of like conscious that I'm a |
|
|
|
00:31:01.730 --> 00:31:03.190 |
|
computer vision researcher, so I was |
|
|
|
00:31:03.190 --> 00:31:03.940 |
|
like, am I? |
|
|
|
00:31:05.520 --> 00:31:07.543 |
|
For Deep Learning, do I just focus on |
|
|
|
00:31:07.543 --> 00:31:09.280 |
|
like I don't want to just focus on the |
|
|
|
00:31:09.280 --> 00:31:11.639 |
|
vision Networks if there were like |
|
|
|
00:31:11.640 --> 00:31:12.935 |
|
other things that were important for |
|
|
|
00:31:12.935 --> 00:31:14.020 |
|
the development of Deep Learning? |
|
|
|
00:31:14.640 --> 00:31:16.090 |
|
But when I looked into it, I realized |
|
|
|
00:31:16.090 --> 00:31:17.930 |
|
that vision was like the breakthrough |
|
|
|
00:31:17.930 --> 00:31:18.560 |
|
in Deep Learning. |
|
|
|
00:31:18.560 --> 00:31:21.496 |
|
So the first big algorithms for Deep |
|
|
|
00:31:21.496 --> 00:31:24.060 |
|
Learning were like as you'll see, based |
|
|
|
00:31:24.060 --> 00:31:26.149 |
|
on ImageNet and image-based
|
|
|
00:31:26.150 --> 00:31:26.880 |
|
classifiers. |
|
|
|
00:31:27.990 --> 00:31:29.160 |
|
And then its huge
|
|
|
00:31:29.160 --> 00:31:32.870 |
|
impact on NLP came a little bit later,
|
|
|
00:31:32.870 --> 00:31:35.203 |
|
but mainly Deep Learning makes its |
|
|
|
00:31:35.203 --> 00:31:37.200 |
|
impact on structured data, where you |
|
|
|
00:31:37.200 --> 00:31:39.660 |
|
have things like images and text, where |
|
|
|
00:31:39.660 --> 00:31:41.880 |
|
relationships between the different |
|
|
|
00:31:41.880 --> 00:31:43.720 |
|
elements that are fed into the network |
|
|
|
00:31:43.720 --> 00:31:46.005 |
|
need to be Learned, where you're trying |
|
|
|
00:31:46.005 --> 00:31:47.540 |
|
to learn patterns of these elements. |
|
|
|
00:31:51.310 --> 00:31:53.050 |
|
Alright, so Deep Learning starts with |
|
|
|
00:31:53.050 --> 00:31:55.260 |
|
the Perceptron, which we already talked |
|
|
|
00:31:55.260 --> 00:31:55.480 |
|
about. |
|
|
|
00:31:55.480 --> 00:31:58.470 |
|
This was proposed by Rosenblatt in 1958.
|
|
|
00:31:59.850 --> 00:32:03.480 |
|
And let me read some of
|
|
|
00:32:03.480 --> 00:32:04.030 |
|
this out loud. |
|
|
|
00:32:04.030 --> 00:32:06.150 |
|
So here's a 1958 New York Times
|
|
|
00:32:06.150 --> 00:32:07.580 |
|
article about the Perceptron. |
|
|
|
00:32:08.310 --> 00:32:11.210 |
|
Called New Navy device learns by doing. |
|
|
|
00:32:12.000 --> 00:32:14.720 |
|
Psychologist shows Embryo of computer |
|
|
|
00:32:14.720 --> 00:32:16.670 |
|
designed to read and grow Wiser. |
|
|
|
00:32:18.050 --> 00:32:20.510 |
|
The Navy revealed the
|
|
|
00:32:20.510 --> 00:32:22.350 |
|
embryo of an electronic computer today |
|
|
|
00:32:22.350 --> 00:32:23.950 |
|
that it expects will be able to walk,
|
|
|
00:32:23.950 --> 00:32:25.810 |
|
talk, see, write and reproduce itself
|
|
|
00:32:25.810 --> 00:32:28.220 |
|
and be conscious of its existence. |
|
|
|
00:32:28.980 --> 00:32:30.750 |
|
The embryo, the Weather Bureau's
|
|
|
00:32:30.750 --> 00:32:33.630 |
|
$2,000,000 704 computer, learned to
|
|
|
00:32:33.630 --> 00:32:35.530 |
|
differentiate between right and left |
|
|
|
00:32:35.530 --> 00:32:37.419 |
|
after 50 attempts in the Navy's |
|
|
|
00:32:37.420 --> 00:32:38.770 |
|
demonstration for newsmen. |
|
|
|
00:32:39.730 --> 00:32:40.270 |
|
This. |
|
|
|
00:32:41.040 --> 00:32:43.830 |
|
I don't know why it took 50 attempts. |
|
|
|
00:32:43.830 --> 00:32:45.520 |
|
There's only two answers. |
|
|
|
00:32:46.240 --> 00:32:48.970 |
|
But the service said it would use this |
|
|
|
00:32:48.970 --> 00:32:51.630 |
|
principle to build the first of its |
|
|
|
00:32:51.630 --> 00:32:53.535 |
|
Perceptron thinking machines that will
|
|
|
00:32:53.535 --> 00:32:54.670 |
|
be able to read and write. |
|
|
|
00:32:54.670 --> 00:32:56.570 |
|
It is expected to be finished in about |
|
|
|
00:32:56.570 --> 00:32:58.920 |
|
a year at a cost of $100,000. |
|
|
|
00:33:01.970 --> 00:33:02.605 |
|
So going on. |
|
|
|
00:33:02.605 --> 00:33:04.860 |
|
So they pretty much underestimated
|
|
|
00:33:04.860 --> 00:33:06.880 |
|
the complexity of artificial
|
|
|
00:33:06.880 --> 00:33:09.133 |
|
intelligence obviously is like we have |
|
|
|
00:33:09.133 --> 00:33:10.670 |
|
the, we have the Perceptron, we'll be |
|
|
|
00:33:10.670 --> 00:33:11.800 |
|
done next year with the. |
|
|
|
00:33:12.700 --> 00:33:13.240 |
|
And. |
|
|
|
00:33:15.620 --> 00:33:17.460 |
|
They did, though, get some of the |
|
|
|
00:33:17.460 --> 00:33:18.023 |
|
impact right. |
|
|
|
00:33:18.023 --> 00:33:20.155 |
|
So they said the brain is designed to |
|
|
|
00:33:20.155 --> 00:33:21.940 |
|
remember images and information it has
|
|
|
00:33:21.940 --> 00:33:22.930 |
|
perceived itself. |
|
|
|
00:33:22.930 --> 00:33:24.540 |
|
Ordinary computers remember only what |
|
|
|
00:33:24.540 --> 00:33:26.220 |
|
is fed into them on punch cards or
|
|
|
00:33:26.220 --> 00:33:28.220 |
|
magnetic tape, so the information is |
|
|
|
00:33:28.220 --> 00:33:29.210 |
|
stored in the weights of the network. |
|
|
|
00:33:30.090 --> 00:33:31.650 |
|
Later Perceptrons will be able to |
|
|
|
00:33:31.650 --> 00:33:33.300 |
|
recognize people and call out their |
|
|
|
00:33:33.300 --> 00:33:35.589 |
|
names and instantly translate speech in |
|
|
|
00:33:35.590 --> 00:33:37.860 |
|
one language to speech or writing in |
|
|
|
00:33:37.860 --> 00:33:39.580 |
|
another language, it was predicted. |
|
|
|
00:33:40.180 --> 00:33:44.110 |
|
So it took 70 years, but it happened. |
|
|
|
00:33:46.150 --> 00:33:50.130 |
|
So it at least shows some insight
|
|
|
00:33:50.130 --> 00:33:51.780 |
|
into like what this
|
|
|
00:33:51.780 --> 00:33:53.900 |
|
technology could become. |
|
|
|
00:33:54.880 --> 00:33:56.430 |
|
So it's a pretty interesting article. |
|
|
|
00:33:58.120 --> 00:34:01.000 |
|
So from the Perceptron we eventually |
|
|
|
00:34:01.000 --> 00:34:03.120 |
|
went to a two layer, two layer neural |
|
|
|
00:34:03.120 --> 00:34:03.550 |
|
network. |
|
|
|
00:34:03.550 --> 00:34:05.170 |
|
I think that didn't happen until the |
|
|
|
00:34:05.170 --> 00:34:06.220 |
|
early 80s. |
|
|
|
00:34:06.700 --> 00:34:07.260 |
|
|
|
|
|
00:34:08.120 --> 00:34:09.440 |
|
And these are more difficult to |
|
|
|
00:34:09.440 --> 00:34:11.910 |
|
optimize the big thing that's, I mean |
|
|
|
00:34:11.910 --> 00:34:14.147 |
|
if you think about it before the 80s |
|
|
|
00:34:14.147 --> 00:34:16.420 |
|
you couldn't even like store digital |
|
|
|
00:34:16.420 --> 00:34:17.720 |
|
data in any quantities. |
|
|
|
00:34:17.720 --> 00:34:19.320 |
|
So it's really hard to do things like. |
|
|
|
00:34:20.350 --> 00:34:22.515 |
|
Multi layer Networks or machine |
|
|
|
00:34:22.515 --> 00:34:23.410 |
|
learning. |
|
|
|
00:34:23.410 --> 00:34:25.162 |
|
So that's kind of why like the machine |
|
|
|
00:34:25.162 --> 00:34:27.520 |
|
learning in 1958 was a huge deal, even |
|
|
|
00:34:27.520 --> 00:34:28.830 |
|
if it's in a very limited form. |
|
|
|
00:34:31.000 --> 00:34:33.023 |
|
And then with these nonlinearities you |
|
|
|
00:34:33.023 --> 00:34:34.550 |
|
can then learn nonlinear functions, |
|
|
|
00:34:34.550 --> 00:34:36.220 |
|
while Perceptrons are limited to linear |
|
|
|
00:34:36.220 --> 00:34:36.740 |
|
functions.
|
|
|
00:34:36.740 --> 00:34:38.520 |
|
And then you can have Multi layer |
|
|
|
00:34:38.520 --> 00:34:40.390 |
|
neural networks where you just have |
|
|
|
00:34:40.390 --> 00:34:41.130 |
|
more layers. |
|
|
|
00:34:42.480 --> 00:34:43.780 |
|
And we talked about how you can |
|
|
|
00:34:43.780 --> 00:34:46.550 |
|
optimize these Networks using a form of |
|
|
|
00:34:46.550 --> 00:34:47.520 |
|
Gradient Descent. |
|
|
|
00:34:48.760 --> 00:34:50.280 |
|
And in particular you do back |
|
|
|
00:34:50.280 --> 00:34:52.270 |
|
propagation, where the
|
|
|
00:34:52.270 --> 00:34:54.434 |
|
Gradients, which tell you how you should change
|
|
|
00:34:54.434 --> 00:34:57.710 |
|
the weights based on
|
|
|
00:34:57.710 --> 00:34:59.642 |
|
how the weights affect the error,
|
|
|
00:34:59.642 --> 00:35:01.570 |
|
can be propagated back through
|
|
|
00:35:01.570 --> 00:35:02.130 |
|
the network. |
|
|
|
00:35:02.970 --> 00:35:03.520 |
|
|
|
|
|
00:35:04.430 --> 00:35:06.920 |
|
And then you can optimize using |
|
|
|
00:35:06.920 --> 00:35:08.890 |
|
stochastic gradient descent, where you |
|
|
|
00:35:08.890 --> 00:35:10.640 |
|
find the best Update based on a small |
|
|
|
00:35:10.640 --> 00:35:11.790 |
|
amount of data at a time. |
|
|
|
00:35:14.670 --> 00:35:18.240 |
|
So now to get to the next Phase I need |
|
|
|
00:35:18.240 --> 00:35:21.085 |
|
to get into MLP's applied to images. |
|
|
|
00:35:21.085 --> 00:35:23.180 |
|
So I want to just very briefly tell you |
|
|
|
00:35:23.180 --> 00:35:24.300 |
|
a little bit about images. |
|
|
|
00:35:25.480 --> 00:35:27.730 |
|
So images, if you have an intensity |
|
|
|
00:35:27.730 --> 00:35:29.842 |
|
image like what we saw for MNIST, then |
|
|
|
00:35:29.842 --> 00:35:32.140 |
|
the image is a matrix.
|
|
|
00:35:32.860 --> 00:35:35.550 |
|
So the rows will be the Y position, |
|
|
|
00:35:35.550 --> 00:35:36.942 |
|
they will be the rows of the image,
|
|
|
00:35:36.942 --> 00:35:38.417 |
|
the columns are the columns of the image,
|
|
|
00:35:38.417 --> 00:35:40.440 |
|
and the values range from zero to 1, |
|
|
|
00:35:40.440 --> 00:35:43.235 |
|
where usually like one is bright and
|
|
|
00:35:43.235 --> 00:35:44.140 |
|
zero is dark. |
|
|
|
00:35:47.410 --> 00:35:49.100 |
|
If you have a color image, then you |
|
|
|
00:35:49.100 --> 00:35:50.769 |
|
have three of these matrices, one for |
|
|
|
00:35:50.770 --> 00:35:54.080 |
|
each color channel, and the standard |
|
|
|
00:35:54.080 --> 00:35:55.760 |
|
way it's stored is in RGB. |
|
|
|
00:35:55.760 --> 00:35:57.100 |
|
So you have one for the red, one for |
|
|
|
00:35:57.100 --> 00:35:58.490 |
|
the green, one for the blue. |
|
|
|
00:36:01.760 --> 00:36:05.200 |
|
And so in Python, an RGB image
|
|
|
00:36:05.200 --> 00:36:07.310 |
|
is stored as a 3 dimensional matrix. |
|
|
|
00:36:08.560 --> 00:36:11.440 |
|
Where for example, the upper left |
|
|
|
00:36:11.440 --> 00:36:14.983 |
|
corner of it, [0, 0, 0], is the red value of
|
|
|
00:36:14.983 --> 00:36:16.360 |
|
the top left pixel. |
|
|
|
00:36:17.590 --> 00:36:21.010 |
|
The value at [y, x, c] in general is
|
|
|
00:36:21.430 --> 00:36:23.920 |
|
the Cth color, so C can be zero, one
|
|
|
00:36:23.920 --> 00:36:25.290 |
|
or two for red, green or blue. |
|
|
|
00:36:26.320 --> 00:36:29.390 |
|
It's at the Yth row and the Xth column, so
|
|
|
00:36:29.390 --> 00:36:31.780 |
|
it's a color of a particular pixel. |
|
|
|
00:36:32.670 --> 00:36:34.990 |
|
So that's how images are stored. |
|
|
|
00:36:35.800 --> 00:36:38.680 |
|
In computers, if you read it in, it will be a
|
|
|
00:36:38.680 --> 00:36:40.490 |
|
3D matrix if it's a color image. |
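As a minimal sketch of this (NumPy arrays with made-up values, not any particular dataset): a grayscale image is a 2D matrix, and an RGB image is a height by width by 3 array indexed as im[y, x, c].

```python
import numpy as np

# A tiny grayscale "image": a 2D matrix, rows = y, columns = x, values in [0, 1].
gray = np.array([[0.0, 0.5],
                 [1.0, 0.2]])
print(gray[0, 1])          # brightness of the pixel at row 0, column 1

# A color image is a 3D array: height x width x 3 channels (R, G, B).
rgb = np.zeros((2, 2, 3))
rgb[0, 0, 0] = 1.0         # [0, 0, 0]: red value of the top-left pixel
rgb[0, 0, 2] = 0.5         # [0, 0, 2]: blue value of the same pixel

# im[y, x, c] is the c-th color (0 = R, 1 = G, 2 = B) at the y-th row, x-th column.
y, x, c = 1, 1, 1
print(rgb[y, x, c])
```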
|
|
|
00:36:44.730 --> 00:36:47.780 |
|
So the wait. |
|
|
|
00:36:47.780 --> 00:36:48.890 |
|
Did I miss something? |
|
|
|
00:36:48.890 --> 00:36:51.705 |
|
Yes, I meant to talk about this first. |
|
|
|
00:36:51.705 --> 00:36:53.592 |
|
So when you're analyzing images. |
|
|
|
00:36:53.592 --> 00:36:56.450 |
|
So in the MNIST problem, we just like |
|
|
|
00:36:56.450 --> 00:36:58.265 |
|
turn the image into a column vector so |
|
|
|
00:36:58.265 --> 00:36:59.995 |
|
that we can apply a linear classifier |
|
|
|
00:36:59.995 --> 00:37:00.660 |
|
to it. |
|
|
|
00:37:00.660 --> 00:37:02.900 |
|
In that case, like there's no longer |
|
|
|
00:37:02.900 --> 00:37:05.823 |
|
any positional structure stored in the |
|
|
|
00:37:05.823 --> 00:37:09.920 |
|
vector, and the logistic regressor or KNN
|
|
|
00:37:09.920 --> 00:37:11.620 |
|
doesn't really care whether pixels were |
|
|
|
00:37:11.620 --> 00:37:12.710 |
|
next to each other or not. |
|
|
|
00:37:12.710 --> 00:37:14.280 |
|
It's just like treating them as like |
|
|
|
00:37:14.280 --> 00:37:15.040 |
|
separate. |
|
|
|
00:37:15.420 --> 00:37:17.630 |
|
Individual Input values that it's going |
|
|
|
00:37:17.630 --> 00:37:19.520 |
|
to use to determine similarity or make |
|
|
|
00:37:19.520 --> 00:37:20.350 |
|
some Prediction. |
|
|
|
00:37:21.300 --> 00:37:24.121 |
|
But we can do much better analysis of |
|
|
|
00:37:24.121 --> 00:37:26.255 |
|
images if we take into account that |
|
|
|
00:37:26.255 --> 00:37:28.130 |
|
like local patterns in the images are
|
|
|
00:37:28.130 --> 00:37:28.760 |
|
important. |
|
|
|
00:37:28.760 --> 00:37:31.260 |
|
So by like trying to find edges or |
|
|
|
00:37:31.260 --> 00:37:33.040 |
|
finding patterns like things that look |
|
|
|
00:37:33.040 --> 00:37:36.043 |
|
like eyes or faces, we can do much |
|
|
|
00:37:36.043 --> 00:37:38.060 |
|
better analysis than if we just like |
|
|
|
00:37:38.060 --> 00:37:39.680 |
|
treat it as a big long vector of |
|
|
|
00:37:39.680 --> 00:37:40.140 |
|
values. |
|
|
|
00:37:42.690 --> 00:37:44.139 |
|
So if you. |
|
|
|
00:37:45.030 --> 00:37:46.770 |
|
One of the common ways of processing |
|
|
|
00:37:46.770 --> 00:37:50.480 |
|
images is that you apply some. |
|
|
|
00:37:50.610 --> 00:37:51.170 |
|
|
|
|
|
00:37:52.010 --> 00:37:54.800 |
|
You apply some weights. |
|
|
|
00:37:55.470 --> 00:37:57.930 |
|
To like different little patches in the |
|
|
|
00:37:57.930 --> 00:37:59.775 |
|
image and you take a dot product of
|
|
|
00:37:59.775 --> 00:38:00.780 |
|
the weights with the patch. |
|
|
|
00:38:01.440 --> 00:38:03.130 |
|
So a simple example is that you could |
|
|
|
00:38:03.130 --> 00:38:06.150 |
|
take the value of a pixel in the center |
|
|
|
00:38:06.150 --> 00:38:08.510 |
|
minus the value of the pixel to the |
|
|
|
00:38:08.510 --> 00:38:10.439 |
|
left minus the value of the pixel to |
|
|
|
00:38:10.440 --> 00:38:11.760 |
|
its right, and that would tell you if |
|
|
|
00:38:11.760 --> 00:38:13.700 |
|
there's an edge at that position. |
|
|
|
00:38:16.620 --> 00:38:17.070 |
|
Right. |
|
|
|
00:38:19.290 --> 00:38:19.760 |
|
So. |
|
|
|
00:38:20.730 --> 00:38:22.766 |
|
When we represented again when we |
|
|
|
00:38:22.766 --> 00:38:25.590 |
|
represented these Networks in MLPS, I |
|
|
|
00:38:25.590 --> 00:38:28.401 |
|
mean when we represented these Networks |
|
|
|
00:38:28.401 --> 00:38:31.870 |
|
in homework one and homework two in |
|
|
|
00:38:31.870 --> 00:38:32.230 |
|
fact. |
|
|
|
00:38:33.100 --> 00:38:36.050 |
|
We just represent the digits as like a |
|
|
|
00:38:36.050 --> 00:38:38.520 |
|
long vector of values as I said, and in
|
|
|
00:38:38.520 --> 00:38:40.090 |
|
that case we would have like these |
|
|
|
00:38:40.090 --> 00:38:41.340 |
|
Fully connected layers. |
|
|
|
00:38:41.990 --> 00:38:44.060 |
|
Where we have a set of weights for each |
|
|
|
00:38:44.060 --> 00:38:45.100 |
|
intermediate Output. |
|
|
|
00:38:45.100 --> 00:38:46.552 |
|
That's just like a linear prediction |
|
|
|
00:38:46.552 --> 00:38:48.640 |
|
from all of the inputs.
|
|
|
00:38:48.640 --> 00:38:50.520 |
|
So this is not yet taking into account |
|
|
|
00:38:50.520 --> 00:38:51.660 |
|
the structure of the image. |
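A rough sketch of that fully connected setup in PyTorch (the layer sizes are made up, not the exact homework network): the digit is flattened into a long vector, and every hidden unit is a linear prediction from all of the inputs.

```python
import torch
import torch.nn as nn

# Flatten a 28x28 digit into a length-784 vector; the positional structure is lost.
image = torch.rand(28, 28)            # stand-in for an MNIST-style digit
x = image.reshape(-1)                 # shape: (784,)

# Fully connected layer: each of the 256 intermediate outputs is a linear
# prediction from all 784 inputs.
fc1 = nn.Linear(784, 256)
hidden = torch.relu(fc1(x))           # nonlinearity between the layers
fc2 = nn.Linear(256, 10)              # 10 class scores at the output
logits = fc2(hidden)
print(logits.shape)                   # torch.Size([10])
```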
|
|
|
00:38:53.730 --> 00:38:56.970 |
|
What if we want to do
|
|
|
00:38:56.970 --> 00:38:58.500 |
|
something more like filtering where we |
|
|
|
00:38:58.500 --> 00:39:00.733 |
|
want to try to take advantage of the fact that
|
|
|
00:39:00.733 --> 00:39:02.460 |
|
the image is composed of different |
|
|
|
00:39:02.460 --> 00:39:04.260 |
|
patches that are kind of like locally |
|
|
|
00:39:04.260 --> 00:39:06.530 |
|
meaningful or the relative values of |
|
|
|
00:39:06.530 --> 00:39:07.870 |
|
nearby pixels are important? |
|
|
|
00:39:08.680 --> 00:39:11.060 |
|
We can do what's called a Convolutional |
|
|
|
00:39:11.060 --> 00:39:11.560 |
|
network. |
|
|
|
00:39:12.860 --> 00:39:14.460 |
|
There, in a Convolutional network,
|
|
|
00:39:15.510 --> 00:39:18.060 |
|
Your weights are just analyzing a local |
|
|
|
00:39:18.060 --> 00:39:19.585 |
|
neighborhood of the image, and by |
|
|
|
00:39:19.585 --> 00:39:21.000 |
|
analyzing I just mean a dot product. |
|
|
|
00:39:21.000 --> 00:39:23.489 |
|
So it's just a linear product, a linear |
|
|
|
00:39:23.490 --> 00:39:25.986 |
|
combination of the pixel values in a |
|
|
|
00:39:25.986 --> 00:39:28.521 |
|
local portion of the image, like a
|
|
|
00:39:28.521 --> 00:39:31.400 |
|
7 pixel by 7 pixel image patch.
|
|
|
00:39:33.170 --> 00:39:37.200 |
|
And if you scan that
|
|
|
00:39:37.200 --> 00:39:39.290 |
|
patch or scan the weights across the |
|
|
|
00:39:39.290 --> 00:39:42.630 |
|
image, you can then extract features or |
|
|
|
00:39:42.630 --> 00:39:44.975 |
|
a feature for every position in the
|
|
|
00:39:44.975 --> 00:39:45.310 |
|
Image. |
|
|
|
00:39:48.700 --> 00:39:50.200 |
|
And these weights can be learned if |
|
|
|
00:39:50.200 --> 00:39:51.670 |
|
you're using a network. |
|
|
|
00:39:52.780 --> 00:39:54.880 |
|
And so for a given set of weights, you |
|
|
|
00:39:54.880 --> 00:39:56.725 |
|
get what's called a feature map. |
|
|
|
00:39:56.725 --> 00:39:58.075 |
|
So this could be representing whether |
|
|
|
00:39:58.075 --> 00:39:59.948 |
|
there's a vertical edge at each |
|
|
|
00:39:59.948 --> 00:40:02.050 |
|
position, or horizontal edge at each |
|
|
|
00:40:02.050 --> 00:40:03.490 |
|
position, or whether there's like a |
|
|
|
00:40:03.490 --> 00:40:04.930 |
|
dark patch in the middle of a bright |
|
|
|
00:40:04.930 --> 00:40:06.200 |
|
area, something like that. |
|
|
|
00:40:08.690 --> 00:40:10.380 |
|
And if you have a bunch of these sets |
|
|
|
00:40:10.380 --> 00:40:11.940 |
|
of Learned weights, then you can |
|
|
|
00:40:11.940 --> 00:40:14.180 |
|
generate a bunch of feature maps, so |
|
|
|
00:40:14.180 --> 00:40:15.490 |
|
they're just representing different |
|
|
|
00:40:15.490 --> 00:40:16.940 |
|
things about the edges or local |
|
|
|
00:40:16.940 --> 00:40:18.110 |
|
patterns in the Image. |
|
|
|
00:40:21.010 --> 00:40:22.025 |
|
Here's an example. |
|
|
|
00:40:22.025 --> 00:40:24.960 |
|
So let's say we have this edge filter |
|
|
|
00:40:24.960 --> 00:40:25.205 |
|
here. |
|
|
|
00:40:25.205 --> 00:40:26.820 |
|
So it's just
|
|
|
00:40:26.820 --> 00:40:28.520 |
|
looking for diagonal edges. |
|
|
|
00:40:28.520 --> 00:40:30.625 |
|
Essentially whether the sum of
|
|
|
00:40:30.625 --> 00:40:32.200 |
|
values in the upper right is greater |
|
|
|
00:40:32.200 --> 00:40:33.460 |
|
than the sum of values in the lower |
|
|
|
00:40:33.460 --> 00:40:33.710 |
|
left. |
|
|
|
00:40:34.820 --> 00:40:36.390 |
|
Kind of like scan that across the |
|
|
|
00:40:36.390 --> 00:40:36.640 |
|
image. |
|
|
|
00:40:36.640 --> 00:40:38.370 |
|
So for each Image position you take the |
|
|
|
00:40:38.370 --> 00:40:39.850 |
|
dot product of these weights with the |
|
|
|
00:40:39.850 --> 00:40:40.520 |
|
image pixels. |
|
|
|
00:40:41.720 --> 00:40:43.220 |
|
And then that gives you some feature |
|
|
|
00:40:43.220 --> 00:40:43.890 |
|
map. |
|
|
|
00:40:43.890 --> 00:40:46.160 |
|
So here like dark and bright values |
|
|
|
00:40:46.160 --> 00:40:47.950 |
|
mean that there is like a strong edge |
|
|
|
00:40:47.950 --> 00:40:48.970 |
|
in that direction. |
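Here is a minimal sketch of that scanning in plain NumPy, with a made-up 3 by 3 diagonal-edge filter (upper-right values positive, lower-left values negative); the output is one feature map.

```python
import numpy as np

def filter2d(image, weights):
    """Scan `weights` across `image`: out[i, j] is the dot product of the
    weights with the patch whose top-left corner is at (i, j). No padding."""
    H, W = image.shape
    h, w = weights.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + h, j:j + w]
            out[i, j] = np.sum(patch * weights)   # dot product with the patch
    return out

# Made-up diagonal edge filter: sum of upper-right minus sum of lower-left.
edge = np.array([[ 0.0,  1.0,  1.0],
                 [-1.0,  0.0,  1.0],
                 [-1.0, -1.0,  0.0]])

image = np.random.rand(8, 8)            # stand-in for a grayscale image
feature_map = filter2d(image, edge)     # one feature map for this set of weights
print(feature_map.shape)                # (6, 6)
```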
|
|
|
00:40:51.200 --> 00:40:53.220 |
|
And then you can do that with other |
|
|
|
00:40:53.220 --> 00:40:55.140 |
|
filters to look for other kinds of |
|
|
|
00:40:55.140 --> 00:40:57.010 |
|
edges or patterns, and you get a bunch |
|
|
|
00:40:57.010 --> 00:40:58.960 |
|
of these feature maps and then they get |
|
|
|
00:40:58.960 --> 00:41:00.190 |
|
stacked together as your next |
|
|
|
00:41:00.190 --> 00:41:01.020 |
|
representation. |
|
|
|
00:41:02.580 --> 00:41:03.605 |
|
So then we get like. |
|
|
|
00:41:03.605 --> 00:41:05.220 |
|
The Width here is like the number of |
|
|
|
00:41:05.220 --> 00:41:05.960 |
|
feature maps. |
|
|
|
00:41:06.770 --> 00:41:08.350 |
|
Sometimes people call them channels. |
|
|
|
00:41:08.350 --> 00:41:10.317 |
|
So you start with an RGB 3 channel |
|
|
|
00:41:10.317 --> 00:41:11.803 |
|
image and then you have like a feature |
|
|
|
00:41:11.803 --> 00:41:12.489 |
|
channel Image. |
|
|
|
00:41:15.010 --> 00:41:16.680 |
|
And next you can do the same thing. |
|
|
|
00:41:16.680 --> 00:41:17.615 |
|
Now your weights. |
|
|
|
00:41:17.615 --> 00:41:19.580 |
|
Now, instead of operating on RGB |
|
|
|
00:41:19.580 --> 00:41:21.417 |
|
values, you operate on the feature |
|
|
|
00:41:21.417 --> 00:41:23.160 |
|
values, but you still analyze local |
|
|
|
00:41:23.160 --> 00:41:24.860 |
|
patches of these feature maps. |
|
|
|
00:41:25.720 --> 00:41:27.180 |
|
And produce new feature maps. |
|
|
|
00:41:29.350 --> 00:41:31.030 |
|
And that's the basic idea of a |
|
|
|
00:41:31.030 --> 00:41:32.480 |
|
Convolutional network. |
|
|
|
00:41:32.480 --> 00:41:34.670 |
|
So you start with the input image. |
|
|
|
00:41:35.630 --> 00:41:38.550 |
|
You do some Convolution using Learned |
|
|
|
00:41:38.550 --> 00:41:39.150 |
|
weights. |
|
|
|
00:41:39.150 --> 00:41:41.600 |
|
You apply some nonlinearity like a |
|
|
|
00:41:41.600 --> 00:41:42.030 |
|
ReLU. |
|
|
|
00:41:43.050 --> 00:41:45.110 |
|
And then you often do like some kind of |
|
|
|
00:41:45.110 --> 00:41:46.280 |
|
spatial pooling. |
|
|
|
00:41:47.300 --> 00:41:50.480 |
|
Which is basically if you take like 2 |
|
|
|
00:41:50.480 --> 00:41:52.390 |
|
by two groups of pixels in the image |
|
|
|
00:41:52.390 --> 00:41:54.070 |
|
and you represent the average or the Max
|
|
|
00:41:54.070 --> 00:41:54.920 |
|
of those pixels. |
|
|
|
00:41:55.690 --> 00:41:57.371 |
|
Then you can like reduce the size of |
|
|
|
00:41:57.371 --> 00:41:59.009 |
|
the image or reduce the size of the |
|
|
|
00:41:59.010 --> 00:42:01.060 |
|
feature map and still like retain a lot |
|
|
|
00:42:01.060 --> 00:42:02.760 |
|
of the original information. |
|
|
|
00:42:03.400 --> 00:42:05.530 |
|
And so this is like the general |
|
|
|
00:42:05.530 --> 00:42:07.900 |
|
structure of convolutional neural |
|
|
|
00:42:07.900 --> 00:42:10.750 |
|
networks or CNNs, that you apply a
|
|
|
00:42:10.750 --> 00:42:13.320 |
|
filter, you apply nonlinearity, and |
|
|
|
00:42:13.320 --> 00:42:15.360 |
|
then you like downsample the image, |
|
|
|
00:42:15.360 --> 00:42:17.830 |
|
meaning you reduce its size by taking |
|
|
|
00:42:17.830 --> 00:42:20.456 |
|
averages of small blocks or maxes of |
|
|
|
00:42:20.456 --> 00:42:20.989 |
|
small blocks. |
|
|
|
00:42:23.360 --> 00:42:25.630 |
|
And you just keep repeating that until |
|
|
|
00:42:25.630 --> 00:42:28.090 |
|
you finally at the end have some linear |
|
|
|
00:42:28.090 --> 00:42:29.100 |
|
layers for Prediction. |
|
|
|
00:42:31.020 --> 00:42:33.110 |
|
So this is just again showing the basic |
|
|
|
00:42:33.110 --> 00:42:34.980 |
|
structure you do Convolution pool, so |
|
|
|
00:42:34.980 --> 00:42:37.320 |
|
it's basically convolve, downsample,
|
|
|
00:42:37.320 --> 00:42:39.590 |
|
convolve, downsample, et cetera, and then
|
|
|
00:42:39.590 --> 00:42:41.710 |
|
linear layers for your final MLP |
|
|
|
00:42:41.710 --> 00:42:42.230 |
|
Prediction. |
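A rough sketch of that convolve / ReLU / downsample / linear-layers pattern in PyTorch; the channel counts and sizes here are made up, not a specific architecture from the slides.

```python
import torch
import torch.nn as nn

# Convolve -> nonlinearity -> downsample, repeated, then linear layers at the end.
cnn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # learned filters over the RGB input
    nn.ReLU(),
    nn.MaxPool2d(2),                              # downsample: max of 2x2 blocks
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # filters over the feature maps
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),                    # final linear prediction layer
)

x = torch.rand(1, 3, 32, 32)   # a batch of one 32x32 RGB image
print(cnn(x).shape)            # torch.Size([1, 10])
```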
|
|
|
00:42:48.040 --> 00:42:48.810 |
|
So. |
|
|
|
00:42:49.580 --> 00:42:53.300 |
|
So the CNN was first invented
|
|
|
00:42:53.300 --> 00:42:54.430 |
|
by Yann LeCun
|
|
|
00:42:55.220 --> 00:42:58.230 |
|
for character and digit recognition in the
|
|
|
00:42:58.230 --> 00:42:58.930 |
|
late 90s. |
|
|
|
00:43:00.360 --> 00:43:01.249 |
|
I'm pretty sure. |
|
|
|
00:43:01.249 --> 00:43:03.780 |
|
I'm pretty sure this is the first |
|
|
|
00:43:03.780 --> 00:43:04.780 |
|
published CNN. |
|
|
|
00:43:05.950 --> 00:43:07.830 |
|
So here it's a little misleading. |
|
|
|
00:43:07.830 --> 00:43:09.450 |
|
It's showing a letter and then 10 |
|
|
|
00:43:09.450 --> 00:43:12.040 |
|
outputs, but it was applied to both |
|
|
|
00:43:12.040 --> 00:43:14.370 |
|
characters and digits, so. |
|
|
|
00:43:15.270 --> 00:43:17.500 |
|
The Input would be some like. |
|
|
|
00:43:17.500 --> 00:43:18.950 |
|
This was also applied to MNIST. |
|
|
|
00:43:20.030 --> 00:43:21.840 |
|
But the Input would be some digit or |
|
|
|
00:43:21.840 --> 00:43:22.360 |
|
character. |
|
|
|
00:43:23.390 --> 00:43:25.765 |
|
You have like 6 feature maps that were |
|
|
|
00:43:25.765 --> 00:43:28.248 |
|
like really big filters, 28 by 28 or |
|
|
|
00:43:28.248 --> 00:43:28.589 |
|
not. |
|
|
|
00:43:28.590 --> 00:43:29.980 |
|
They're not necessarily big filters, |
|
|
|
00:43:29.980 --> 00:43:30.280 |
|
sorry. |
|
|
|
00:43:30.280 --> 00:43:32.730 |
|
Produce a 28 by 28 Size image after |
|
|
|
00:43:32.730 --> 00:43:34.420 |
|
like filtering the image or applying |
|
|
|
00:43:34.420 --> 00:43:36.700 |
|
these filters to the image, so a value |
|
|
|
00:43:36.700 --> 00:43:37.820 |
|
at each position. |
|
|
|
00:43:38.690 --> 00:43:40.520 |
|
That's like inside of this patch. |
|
|
|
00:43:41.710 --> 00:43:43.110 |
|
They have six feature maps. |
|
|
|
00:43:43.110 --> 00:43:45.410 |
|
Then you do an average pooling, which |
|
|
|
00:43:45.410 --> 00:43:47.220 |
|
means that you average two by two |
|
|
|
00:43:47.220 --> 00:43:47.690 |
|
blocks. |
|
|
|
00:43:48.720 --> 00:43:51.320 |
|
And then you get more feature maps by |
|
|
|
00:43:51.320 --> 00:43:53.900 |
|
applying like filters to these guys,
|
|
|
00:43:53.900 --> 00:43:56.170 |
|
so a weighted combination of feature |
|
|
|
00:43:56.170 --> 00:43:58.420 |
|
values at each position in local |
|
|
|
00:43:58.420 --> 00:43:59.010 |
|
neighborhoods. |
|
|
|
00:44:00.070 --> 00:44:01.910 |
|
So now we have 16 feature maps that are |
|
|
|
00:44:01.910 --> 00:44:05.120 |
|
size 10 by 10 and then we again do some |
|
|
|
00:44:05.120 --> 00:44:07.520 |
|
average pooling and then we have our |
|
|
|
00:44:07.520 --> 00:44:09.370 |
|
linear layers of the MLP. |
|
|
|
00:44:10.300 --> 00:44:12.470 |
|
And there were sigmoids in between |
|
|
|
00:44:12.470 --> 00:44:12.720 |
|
them. |
|
|
|
00:44:13.670 --> 00:44:16.245 |
|
And so that's the basic idea. |
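A rough LeNet-style sketch in PyTorch that matches the sizes mentioned (a 32x32 input, 6 feature maps of 28x28, average pooling, 16 feature maps of 10x10, more pooling, then linear layers with sigmoids in between); it is an approximation of the idea, not the exact 1998 network.

```python
import torch
import torch.nn as nn

lenet = nn.Sequential(
    nn.Conv2d(1, 6, kernel_size=5),    # 32x32 input -> 6 feature maps of 28x28
    nn.Sigmoid(),
    nn.AvgPool2d(2),                   # average 2x2 blocks -> 6 maps of 14x14
    nn.Conv2d(6, 16, kernel_size=5),   # -> 16 feature maps of 10x10
    nn.Sigmoid(),
    nn.AvgPool2d(2),                   # -> 16 maps of 5x5
    nn.Flatten(),
    nn.Linear(16 * 5 * 5, 120),        # linear layers of the MLP
    nn.Sigmoid(),
    nn.Linear(120, 84),
    nn.Sigmoid(),
    nn.Linear(84, 10),                 # 10 outputs for the digit classes
)

x = torch.rand(1, 1, 32, 32)
print(lenet(x).shape)                  # torch.Size([1, 10])
```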
|
|
|
00:44:16.245 --> 00:44:17.990 |
|
So this was actually like a kind of |
|
|
|
00:44:17.990 --> 00:44:20.070 |
|
like a big deal, but it never got |
|
|
|
00:44:20.070 --> 00:44:22.406 |
|
pushed any further for a long time. |
|
|
|
00:44:22.406 --> 00:44:23.019 |
|
So for. |
|
|
|
00:44:23.850 --> 00:44:25.100 |
|
Between 1998
|
|
|
00:44:25.770 --> 00:44:28.790 |
|
and 2012, there were really no more
|
|
|
00:44:28.790 --> 00:44:30.710 |
|
breakthroughs involving convolutional |
|
|
|
00:44:30.710 --> 00:44:32.270 |
|
neural networks or any form of Deep |
|
|
|
00:44:32.270 --> 00:44:32.650 |
|
Learning. |
|
|
|
00:44:33.600 --> 00:44:37.090 |
|
Yann LeCun and
|
|
|
00:44:37.160 --> 00:44:41.280 |
|
Léon Bottou and Yoshua Bengio and Andrew
|
|
|
00:44:41.280 --> 00:44:42.860 |
|
Ng and others were like pushing on
|
|
|
00:44:42.860 --> 00:44:43.410 |
|
Deep Networks. |
|
|
|
00:44:43.410 --> 00:44:45.270 |
|
They're writing papers like why this |
|
|
|
00:44:45.270 --> 00:44:47.870 |
|
makes sense, why it's like the right |
|
|
|
00:44:47.870 --> 00:44:48.410 |
|
thing to do. |
|
|
|
00:44:49.250 --> 00:44:50.700 |
|
And they're trying to get them to work, |
|
|
|
00:44:50.700 --> 00:44:52.560 |
|
but they just kind of couldn't. |
|
|
|
00:44:52.560 --> 00:44:55.310 |
|
Like they were hard to train and just |
|
|
|
00:44:55.310 --> 00:44:56.950 |
|
not getting results that were better |
|
|
|
00:44:56.950 --> 00:44:58.509 |
|
than other approaches that were better |
|
|
|
00:44:58.510 --> 00:44:58.990 |
|
understood. |
|
|
|
00:44:59.750 --> 00:45:02.070 |
|
So people gave up on Deep Networks,
|
|
|
00:45:02.070 --> 00:45:04.370 |
|
MLPs and Convolutional Nets.
|
|
|
00:45:05.090 --> 00:45:06.648 |
|
And were just doing like SVMs and
|
|
|
00:45:06.648 --> 00:45:08.536 |
|
random forests and
|
|
|
00:45:08.536 --> 00:45:09.760 |
|
things that had better theoretical |
|
|
|
00:45:09.760 --> 00:45:10.560 |
|
justification. |
|
|
|
00:45:11.600 --> 00:45:12.850 |
|
And some of the researchers
|
|
|
00:45:12.850 --> 00:45:14.590 |
|
got really frustrated, like Yann
|
|
|
00:45:14.590 --> 00:45:16.106 |
|
LeCun, and wrote a letter that said he
|
|
|
00:45:16.106 --> 00:45:17.950 |
|
was like not going to CVPR anymore |
|
|
|
00:45:17.950 --> 00:45:20.000 |
|
because they're rejecting
|
|
|
00:45:20.000 --> 00:45:22.270 |
|
his papers and he was quitting. |
|
|
|
00:45:22.270 --> 00:45:24.086 |
|
I mean, he didn't quit, but he quit |
|
|
|
00:45:24.086 --> 00:45:24.349 |
|
CVPR. |
|
|
|
00:45:25.510 --> 00:45:27.270 |
|
I can kind of like poke at him a bit |
|
|
|
00:45:27.270 --> 00:45:28.570 |
|
because now he's made millions of |
|
|
|
00:45:28.570 --> 00:45:30.567 |
|
dollars and won the Turing award, so he |
|
|
|
00:45:30.567 --> 00:45:32.240 |
|
got, he got his rewards. |
|
|
|
00:45:35.350 --> 00:45:39.130 |
|
So all this changed in 2012. |
|
|
|
00:45:39.780 --> 00:45:41.385 |
|
And one of the things that happened is |
|
|
|
00:45:41.385 --> 00:45:43.633 |
|
that this big data set was created by |
|
|
|
00:45:43.633 --> 00:45:45.166 |
|
Fei-Fei Li and her students.
|
|
|
00:45:45.166 --> 00:45:48.278 |
|
She was actually at UIUC and then she
|
|
|
00:45:48.278 --> 00:45:49.590 |
|
went to Princeton and then she went to |
|
|
|
00:45:49.590 --> 00:45:49.890 |
|
Stanford. |
|
|
|
00:45:52.110 --> 00:45:56.140 |
|
There were fourteen million, so they |
|
|
|
00:45:56.140 --> 00:45:58.140 |
|
got a ton of images, a ton of different |
|
|
|
00:45:58.140 --> 00:45:58.790 |
|
classes. |
|
|
|
00:45:59.530 --> 00:46:00.980 |
|
And they labeled them. |
|
|
|
00:46:00.980 --> 00:46:02.990 |
|
So at the end it was
|
|
|
00:46:02.990 --> 00:46:06.250 |
|
this enormous data set that had 1.2 |
|
|
|
00:46:06.250 --> 00:46:09.330 |
|
million Training images in 1000 |
|
|
|
00:46:09.330 --> 00:46:10.180 |
|
different classes. |
|
|
|
00:46:10.180 --> 00:46:12.090 |
|
So a lot of data to learn from. |
|
|
|
00:46:13.430 --> 00:46:15.440 |
|
A lot of researchers weren't like all |
|
|
|
00:46:15.440 --> 00:46:16.830 |
|
that interested in this because |
|
|
|
00:46:16.830 --> 00:46:18.810 |
|
Classification is a relatively simple |
|
|
|
00:46:18.810 --> 00:46:21.140 |
|
problem compared to object detection or |
|
|
|
00:46:21.140 --> 00:46:22.980 |
|
segmentation or other kinds of vision |
|
|
|
00:46:22.980 --> 00:46:23.420 |
|
problems. |
|
|
|
00:46:25.180 --> 00:46:26.660 |
|
But there were challenges that were |
|
|
|
00:46:26.660 --> 00:46:28.160 |
|
held a year to year. |
|
|
|
00:46:29.950 --> 00:46:33.720 |
|
And so one of these challenges was the
|
|
|
00:46:33.720 --> 00:46:35.740 |
|
2012 ImageNet Challenge. |
|
|
|
00:46:36.720 --> 00:46:38.090 |
|
There are a lot of methods that were |
|
|
|
00:46:38.090 --> 00:46:39.710 |
|
proposed and they all got pretty |
|
|
|
00:46:39.710 --> 00:46:41.090 |
|
similar results. |
|
|
|
00:46:41.090 --> 00:46:44.347 |
|
So you can see one of the methods got |
|
|
|
00:46:44.347 --> 00:46:46.890 |
|
35% error, one got 30% error, these |
|
|
|
00:46:46.890 --> 00:46:49.280 |
|
others got like maybe 27% error. |
|
|
|
00:46:50.440 --> 00:46:54.520 |
|
And then there is one more that got 15% |
|
|
|
00:46:54.520 --> 00:46:54.930 |
|
error. |
|
|
|
00:46:55.860 --> 00:46:59.210 |
|
And it's like if you see for a couple |
|
|
|
00:46:59.210 --> 00:47:01.630 |
|
years, everybody's getting like 25 to |
|
|
|
00:47:01.630 --> 00:47:03.640 |
|
30% error and then all of a sudden |
|
|
|
00:47:03.640 --> 00:47:05.580 |
|
somebody gets 15% error. |
|
|
|
00:47:05.580 --> 00:47:07.160 |
|
That's like a big difference. |
|
|
|
00:47:07.160 --> 00:47:08.717 |
|
It's like, what the heck happened? |
|
|
|
00:47:08.717 --> 00:47:09.458 |
|
How is that? |
|
|
|
00:47:09.458 --> 00:47:10.760 |
|
How is that possible? |
|
|
|
00:47:11.630 --> 00:47:11.930 |
|
So. |
|
|
|
00:47:13.740 --> 00:47:17.180 |
|
And I was actually at this workshop at |
|
|
|
00:47:17.180 --> 00:47:21.740 |
|
ECCV in France, in Marseille, I think.
|
|
|
00:47:22.450 --> 00:47:25.260 |
|
And I remember it like people were |
|
|
|
00:47:25.260 --> 00:47:25.510 |
|
pretty. |
|
|
|
00:47:25.510 --> 00:47:27.113 |
|
Everyone was talking about it and was |
|
|
|
00:47:27.113 --> 00:47:28.090 |
|
like, what does this mean? |
|
|
|
00:47:28.090 --> 00:47:29.480 |
|
Did Deep Learning finally work? |
|
|
|
00:47:29.480 --> 00:47:31.910 |
|
And, like, now we have to start paying |
|
|
|
00:47:31.910 --> 00:47:33.990 |
|
attention to these people? |
|
|
|
00:47:33.990 --> 00:47:35.543 |
|
So they're really astonished. |
|
|
|
00:47:35.543 --> 00:47:37.750 |
|
I mean, everyone was really astonished. |
|
|
|
00:47:37.750 --> 00:47:40.280 |
|
And what was behind this was
|
|
|
00:47:40.280 --> 00:47:40.960 |
|
AlexNet. |
|
|
|
00:47:41.890 --> 00:47:42.830 |
|
So AlexNet. |
|
|
|
00:47:43.540 --> 00:47:46.010 |
|
It was the same kind of network as
|
|
|
00:47:46.010 --> 00:47:48.950 |
|
LeCun's network with just some changes. |
|
|
|
00:47:48.950 --> 00:47:52.373 |
|
So same kind of Convolution and pool. |
|
|
|
00:47:52.373 --> 00:47:54.610 |
|
Convolution and pool, followed by linear
|
|
|
00:47:54.610 --> 00:47:55.080 |
|
layers.
|
|
|
00:47:56.080 --> 00:47:58.673 |
|
But so there are
|
|
|
00:47:58.673 --> 00:48:00.650 |
|
important differences and non-important
|
|
|
00:48:00.650 --> 00:48:02.220 |
|
differences and at the time people |
|
|
|
00:48:02.220 --> 00:48:03.456 |
|
didn't really know what was important |
|
|
|
00:48:03.456 --> 00:48:04.270 |
|
and what wasn't. |
|
|
|
00:48:04.270 --> 00:48:07.306 |
|
But a non important difference was Max |
|
|
|
00:48:07.306 --> 00:48:08.740 |
|
pooling versus average pooling. |
|
|
|
00:48:08.740 --> 00:48:10.950 |
|
Taking the Max of a little window, little
|
|
|
00:48:10.950 --> 00:48:12.470 |
|
groups of pixels instead of the average |
|
|
|
00:48:12.470 --> 00:48:13.350 |
|
when you downsample. |
|
|
|
00:48:14.440 --> 00:48:16.040 |
|
An important difference was ReLU |
|
|
|
00:48:16.040 --> 00:48:18.140 |
|
nonlinearity instead of Sigmoid. |
|
|
|
00:48:18.140 --> 00:48:19.820 |
|
That made it much more optimizable. |
|
|
|
00:48:21.010 --> 00:48:22.550 |
|
An important difference was that there |
|
|
|
00:48:22.550 --> 00:48:24.340 |
|
was a lot more data to learn from. |
|
|
|
00:48:24.340 --> 00:48:27.010 |
|
You had these thousand classes and 1.2 |
|
|
|
00:48:27.010 --> 00:48:28.680 |
|
million images where previously |
|
|
|
00:48:28.680 --> 00:48:30.360 |
|
datasets were created that were just |
|
|
|
00:48:30.360 --> 00:48:31.950 |
|
big enough for the current algorithms. |
|
|
|
00:48:32.560 --> 00:48:35.170 |
|
So actually like people found that you |
|
|
|
00:48:35.170 --> 00:48:38.000 |
|
might have like 10,000
|
|
|
00:48:38.000 --> 00:48:39.436 |
|
images in your data set and people |
|
|
|
00:48:39.436 --> 00:48:40.660 |
|
found well if you make it bigger, |
|
|
|
00:48:40.660 --> 00:48:42.300 |
|
things don't really get better anyway. |
|
|
|
00:48:42.300 --> 00:48:44.370 |
|
So no point wasting all that time |
|
|
|
00:48:44.370 --> 00:48:45.390 |
|
making a bigger dataset. |
|
|
|
00:48:46.820 --> 00:48:48.690 |
|
But you needed that data for these |
|
|
|
00:48:48.690 --> 00:48:49.220 |
|
Networks. |
|
|
|
00:48:50.640 --> 00:48:54.800 |
|
They made a bigger model than Yann
|
|
|
00:48:54.800 --> 00:48:55.560 |
|
LeCun's Model.
|
|
|
00:48:56.270 --> 00:48:57.770 |
|
60 million parameters. |
|
|
|
00:48:57.770 --> 00:49:00.260 |
|
It's actually a really big Model, even |
|
|
|
00:49:00.260 --> 00:49:01.440 |
|
by today's standards. |
|
|
|
00:49:01.440 --> 00:49:02.990 |
|
You often use smaller models than this.
|
|
|
00:49:04.590 --> 00:49:06.910 |
|
I mean, it's not really big, but it's |
|
|
|
00:49:06.910 --> 00:49:09.190 |
|
pretty big.
|
|
|
00:49:09.190 --> 00:49:10.940 |
|
And then they had a GPU implementation |
|
|
|
00:49:10.940 --> 00:49:13.120 |
|
which gave a 50X speedup over the CPU.
|
|
|
00:49:13.120 --> 00:49:14.280 |
|
So that meant that you could do the |
|
|
|
00:49:14.280 --> 00:49:16.720 |
|
optimization. They Trained
|
|
|
00:49:16.720 --> 00:49:18.020 |
|
on 2 GPUs for a week. |
|
|
|
00:49:18.020 --> 00:49:20.300 |
|
But if you imagine a 50X speedup, it
|
|
|
00:49:20.300 --> 00:49:23.680 |
|
would have taken a year on CPUs. |
|
|
|
00:49:24.300 --> 00:49:26.290 |
|
So obviously, like if you're a network, |
|
|
|
00:49:26.290 --> 00:49:28.450 |
|
if your Model takes a year to train, |
|
|
|
00:49:28.450 --> 00:49:30.220 |
|
that's kind of like a little too long. |
|
|
|
00:49:32.230 --> 00:49:33.640 |
|
And then they did this Dropout |
|
|
|
00:49:33.640 --> 00:49:35.150 |
|
regularization, which I won't talk |
|
|
|
00:49:35.150 --> 00:49:36.740 |
|
about because it's actually turned out |
|
|
|
00:49:36.740 --> 00:49:37.650 |
|
not to be all that important. |
|
|
|
00:49:38.370 --> 00:49:40.330 |
|
But it is something worth knowing if |
|
|
|
00:49:40.330 --> 00:49:41.920 |
|
you want to be a Deep Learning expert. |
|
|
|
00:49:44.530 --> 00:49:47.340 |
|
What enabled the breakthrough is that the
|
|
|
00:49:47.340 --> 00:49:50.660 |
|
ReLU Activation enabled large models to |
|
|
|
00:49:50.660 --> 00:49:52.420 |
|
be optimized because the Gradients more |
|
|
|
00:49:52.420 --> 00:49:53.900 |
|
easily flow through the network, where |
|
|
|
00:49:53.900 --> 00:49:55.620 |
|
the Sigmoid like squeezes off the |
|
|
|
00:49:55.620 --> 00:49:56.460 |
|
Gradients at both ends.
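A small sketch of that point using autograd: the sigmoid's gradient is squeezed toward zero at both ends, while ReLU passes the gradient straight through for positive inputs.

```python
import torch

x = torch.tensor([-6.0, -1.0, 0.5, 6.0], requires_grad=True)

# Sigmoid: gradient is sigmoid(x) * (1 - sigmoid(x)), nearly zero for large |x|.
torch.sigmoid(x).sum().backward()
print("sigmoid grads:", x.grad)   # tiny at -6 and 6, so little signal flows back

x.grad = None
# ReLU: gradient is 1 wherever x > 0, so the signal flows through unchanged.
torch.relu(x).sum().backward()
print("relu grads:   ", x.grad)   # 0 for negative inputs, 1 for positive inputs
```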
|
|
|
00:49:58.080 --> 00:50:00.300 |
|
Then the ImageNet data set provided
|
|
|
00:50:00.300 --> 00:50:02.861 |
|
diverse and massive annotation that
|
|
|
00:50:02.861 --> 00:50:05.068 |
|
the models could
|
|
|
00:50:05.068 --> 00:50:08.170 |
|
take advantage of;
|
|
|
00:50:08.170 --> 00:50:09.530 |
|
the models and this
|
|
|
00:50:09.530 --> 00:50:11.310 |
|
large data need each other.
|
|
|
00:50:12.350 --> 00:50:14.640 |
|
And then there's GPU processing that |
|
|
|
00:50:14.640 --> 00:50:16.510 |
|
made the optimization
|
|
|
00:50:16.510 --> 00:50:17.080 |
|
practical.
|
|
|
00:50:17.080 --> 00:50:19.450 |
|
So you needed like basically all three |
|
|
|
00:50:19.450 --> 00:50:21.110 |
|
of these ingredients at once in order |
|
|
|
00:50:21.110 --> 00:50:21.980 |
|
to make the breakthrough. |
|
|
|
00:50:21.980 --> 00:50:23.210 |
|
So that's why, even though there were
|
|
|
00:50:23.210 --> 00:50:24.810 |
|
people pushing on it, it didn't happen right away.
|
|
|
00:50:26.150 --> 00:50:26.990 |
|
It took a while. |
|
|
|
00:50:29.280 --> 00:50:31.020 |
|
So it wasn't just ImageNet and |
|
|
|
00:50:31.020 --> 00:50:31.930 |
|
Classification? |
|
|
|
00:50:32.840 --> 00:50:34.120 |
|
It turned out all kinds of other |
|
|
|
00:50:34.120 --> 00:50:36.280 |
|
problems also benefited tremendously |
|
|
|
00:50:36.280 --> 00:50:38.550 |
|
from Deep Learning, and in pretty |
|
|
|
00:50:38.550 --> 00:50:39.250 |
|
simple ways. |
|
|
|
00:50:39.250 --> 00:50:42.210 |
|
So, like in the next two years later, |
|
|
|
00:50:42.210 --> 00:50:43.990 |
|
Girshick et al. |
|
|
|
00:50:44.140 --> 00:50:44.690 |
|
|
|
|
|
00:50:45.670 --> 00:50:48.380 |
|
Found that if you take a network that |
|
|
|
00:50:48.380 --> 00:50:50.400 |
|
has been trained on Imagenet and you |
|
|
|
00:50:50.400 --> 00:50:52.260 |
|
use it for object detection. |
|
|
|
00:50:52.260 --> 00:50:54.590 |
|
So you basically just use it
|
|
|
00:50:54.590 --> 00:50:56.550 |
|
to analyze like each patch of the image |
|
|
|
00:50:56.550 --> 00:50:58.720 |
|
and make predictions off of those |
|
|
|
00:50:58.720 --> 00:51:01.225 |
|
features that are generated from the |
|
|
|
00:51:01.225 --> 00:51:01.500 |
|
ImageNet. |
|
|
|
00:51:02.250 --> 00:51:04.520 |
|
Network for each patch. |
|
|
|
00:51:04.520 --> 00:51:06.945 |
|
Then they were able to get a big boost |
|
|
|
00:51:06.945 --> 00:51:08.040 |
|
in Detection. |
|
|
|
00:51:08.040 --> 00:51:10.170 |
|
So again, if you think about it, this |
|
|
|
00:51:10.170 --> 00:51:12.620 |
|
is the Dalal Triggs detector that I |
|
|
|
00:51:12.620 --> 00:51:14.710 |
|
talked about in the context of SVM. |
|
|
|
00:51:16.230 --> 00:51:17.690 |
|
And then there's like these Deformable |
|
|
|
00:51:17.690 --> 00:51:19.440 |
|
parts models which are like more |
|
|
|
00:51:19.440 --> 00:51:21.700 |
|
complex models modeling the parts of |
|
|
|
00:51:21.700 --> 00:51:22.260 |
|
the objects. |
|
|
|
00:51:23.080 --> 00:51:25.570 |
|
You get some improvement over a 6 year
|
|
|
00:51:25.570 --> 00:51:28.920 |
|
period, from 0.2 to 0.4.
|
|
|
00:51:28.920 --> 00:51:29.940 |
|
Higher is better here. |
|
|
|
00:51:30.720 --> 00:51:32.770 |
|
And then in one year it goes from 0.4 to
|
|
|
00:51:32.770 --> 00:51:36.170 |
|
0.6, so again a huge jump, and then this
|
|
|
00:51:36.170 --> 00:51:39.960 |
|
rapidly shot up even higher in
|
|
|
00:51:39.960 --> 00:51:40.610 |
|
following years. |
|
|
|
00:51:42.160 --> 00:51:43.430 |
|
And then there are papers like this |
|
|
|
00:51:43.430 --> 00:51:45.240 |
|
that showed, hey, if you just take the |
|
|
|
00:51:45.240 --> 00:51:47.890 |
|
features from this network that's |
|
|
|
00:51:47.890 --> 00:51:50.400 |
|
trained on Imagenet and you apply it to |
|
|
|
00:51:50.400 --> 00:51:52.350 |
|
a whole range of Classification task. |
|
|
|
00:51:53.010 --> 00:51:55.810 |
|
It outperforms the classifiers that |
|
|
|
00:51:55.810 --> 00:51:58.250 |
|
had handcrafted features for
|
|
|
00:51:58.250 --> 00:51:59.300 |
|
each of these data sets. |
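A minimal sketch of that recipe (assuming a recent torchvision with downloadable pretrained weights; a ResNet-18 is used here simply as a convenient ImageNet-pretrained backbone, not the specific networks from those papers): freeze the pretrained network, use it as a feature extractor, and train a small classifier on top.

```python
import torch
import torch.nn as nn
from torchvision import models

# A network trained on ImageNet, reused as a generic feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()            # drop the original 1000-way classifier
for p in backbone.parameters():
    p.requires_grad = False            # freeze the pretrained weights

head = nn.Linear(512, 5)               # new classifier for, say, a 5-class dataset

images = torch.rand(2, 3, 224, 224)    # stand-in for preprocessed input images
with torch.no_grad():
    feats = backbone(images)           # ImageNet features, shape (2, 512)
scores = head(feats)
print(scores.shape)                    # torch.Size([2, 5])
```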
|
|
|
00:52:00.280 --> 00:52:02.790 |
|
So basically just like everything was |
|
|
|
00:52:02.790 --> 00:52:04.970 |
|
being reset like expectations and what |
|
|
|
00:52:04.970 --> 00:52:08.360 |
|
kind of performance is achievable and |
|
|
|
00:52:08.360 --> 00:52:09.925 |
|
Deep Networks were outperforming |
|
|
|
00:52:09.925 --> 00:52:10.580 |
|
everything. |
|
|
|
00:52:13.370 --> 00:52:13.780 |
|
So. |
|
|
|
00:52:14.650 --> 00:52:17.350 |
|
I'm not going to take the full break, |
|
|
|
00:52:17.350 --> 00:52:19.390 |
|
sorry, but I will show you this video. |
|
|
|
00:52:20.860 --> 00:52:22.610 |
|
So it was kind of, it was pretty |
|
|
|
00:52:22.610 --> 00:52:23.640 |
|
interesting time. |
|
|
|
00:52:23.640 --> 00:52:26.595 |
|
It's really a Deep, it's truly like a |
|
|
|
00:52:26.595 --> 00:52:28.230 |
|
Deep Learning revolution for machine |
|
|
|
00:52:28.230 --> 00:52:29.180 |
|
learning. |
|
|
|
00:52:29.180 --> 00:52:30.980 |
|
All the other methods and concepts are |
|
|
|
00:52:30.980 --> 00:52:34.150 |
|
still applicable, but a lot of the high |
|
|
|
00:52:34.150 --> 00:52:36.180 |
|
performance is coming out of the use of |
|
|
|
00:52:36.180 --> 00:52:37.620 |
|
big data and Deep Learning. |
|
|
|
00:52:37.620 --> 00:52:37.950 |
|
Question. |
|
|
|
00:52:45.560 --> 00:52:46.510 |
|
How did they annotate them?
|
|
|
00:52:48.240 --> 00:52:50.040 |
|
So I think they use what's called |
|
|
|
00:52:50.040 --> 00:52:51.410 |
|
Amazon Mechanical Turk. |
|
|
|
00:52:51.410 --> 00:52:53.990 |
|
So that's like a crowdsourcing platform |
|
|
|
00:52:53.990 --> 00:52:55.050 |
|
where you can put up. |
|
|
|
00:52:56.050 --> 00:52:58.110 |
|
Somebody like tabs through images and |
|
|
|
00:52:58.110 --> 00:53:00.730 |
|
you pay them to. |
|
|
|
00:53:00.840 --> 00:53:01.430 |
|
Label them. |
|
|
|
00:53:02.220 --> 00:53:04.065 |
|
But they first, So what they did is |
|
|
|
00:53:04.065 --> 00:53:04.570 |
|
they actually. |
|
|
|
00:53:04.570 --> 00:53:05.910 |
|
It's not a stupid question by the way. |
|
|
|
00:53:05.910 --> 00:53:07.560 |
|
It's like how you annotate, how do you |
|
|
|
00:53:07.560 --> 00:53:07.980 |
|
get data. |
|
|
|
00:53:07.980 --> 00:53:09.710 |
|
Annotation is like the key problem in |
|
|
|
00:53:09.710 --> 00:53:10.380 |
|
applications. |
|
|
|
00:53:11.680 --> 00:53:12.310 |
|
But. |
|
|
|
00:53:14.080 --> 00:53:16.000 |
|
What they did is they first they use |
|
|
|
00:53:16.000 --> 00:53:18.870 |
|
Wordnet to get a set of like different |
|
|
|
00:53:18.870 --> 00:53:21.680 |
|
nouns and then they use image search to |
|
|
|
00:53:21.680 --> 00:53:23.280 |
|
download images that correspond to |
|
|
|
00:53:23.280 --> 00:53:24.320 |
|
those nouns. |
|
|
|
00:53:24.320 --> 00:53:25.829 |
|
So then they needed people to like |
|
|
|
00:53:25.830 --> 00:53:27.565 |
|
curate the data to say whether or not |
|
|
|
00:53:27.565 --> 00:53:29.250 |
|
like if they searched for. |
|
|
|
00:53:30.300 --> 00:53:32.640 |
|
For golden retriever for example, like |
|
|
|
00:53:32.640 --> 00:53:34.183 |
|
make sure that it's actually a golden |
|
|
|
00:53:34.183 --> 00:53:36.200 |
|
retriever, so kind of clean the labels |
|
|
|
00:53:36.200 --> 00:53:38.580 |
|
rather than assign it to one out of |
|
|
|
00:53:38.580 --> 00:53:39.200 |
|
1000 labels. |
|
|
|
00:53:40.280 --> 00:53:41.870 |
|
But it was pretty massive project. |
|
|
|
00:53:42.710 --> 00:53:42.930 |
|
Yeah. |
|
|
|
00:53:45.130 --> 00:53:49.140 |
|
So at the time, it felt like computer |
|
|
|
00:53:49.140 --> 00:53:50.409 |
|
vision researchers were like the |
|
|
|
00:53:50.410 --> 00:53:52.921 |
|
samurai, like you like Learned all |
|
|
|
00:53:52.921 --> 00:53:54.940 |
|
these, made friends with the pixels you |
|
|
|
00:53:54.940 --> 00:53:56.930 |
|
had, learned all these feature |
|
|
|
00:53:56.930 --> 00:53:57.450 |
|
representations. |
|
|
|
00:53:57.450 --> 00:53:59.430 |
|
You Applied your expertise to solve the |
|
|
|
00:53:59.430 --> 00:53:59.880 |
|
problems. |
|
|
|
00:54:00.940 --> 00:54:02.530 |
|
And then big data came along. |
|
|
|
00:54:03.640 --> 00:54:05.510 |
|
And Deep Learning. |
|
|
|
00:54:06.360 --> 00:54:07.920 |
|
And it's not that inappropriate. |
|
|
|
00:54:07.920 --> 00:54:08.550 |
|
Don't worry. |
|
|
|
00:54:11.290 --> 00:54:12.280 |
|
And. |
|
|
|
00:54:13.140 --> 00:54:15.040 |
|
It was like this scene in the Last |
|
|
|
00:54:15.040 --> 00:54:15.780 |
|
samurai. |
|
|
|
00:54:16.720 --> 00:54:18.610 |
|
Where there's these like. |
|
|
|
00:54:19.270 --> 00:54:21.680 |
|
Craftsman of war and of combat. |
|
|
|
00:54:21.680 --> 00:54:24.097 |
|
And then the other side buys these |
|
|
|
00:54:24.097 --> 00:54:27.060 |
|
Gatling guns and just pours bullets |
|
|
|
00:54:27.060 --> 00:54:28.400 |
|
into the Gatling guns. |
|
|
|
00:54:29.720 --> 00:54:32.120 |
|
And just mows down the samurai.
|
|
|
00:54:37.180 --> 00:54:39.150 |
|
So that was basically Deep Learning. |
|
|
|
00:54:39.150 --> 00:54:40.420 |
|
It's like you're no longer
|
|
|
00:54:40.420 --> 00:54:42.090 |
|
handcrafting these features and |
|
|
|
00:54:42.090 --> 00:54:43.840 |
|
applying all of this art and knowledge. |
|
|
|
00:54:43.840 --> 00:54:45.516 |
|
You just have this big network and you |
|
|
|
00:54:45.516 --> 00:54:47.865 |
|
just like pour in data and it totally |
|
|
|
00:54:47.865 --> 00:54:49.360 |
|
like massacres all the other |
|
|
|
00:54:49.360 --> 00:54:50.220 |
|
algorithms. |
|
|
|
00:54:58.600 --> 00:54:59.210 |
|
Yeah. |
|
|
|
00:55:10.130 --> 00:55:12.380 |
|
What is the next thing? |
|
|
|
00:55:17.790 --> 00:55:20.040 |
|
So all right, so in my personal |
|
|
|
00:55:20.040 --> 00:55:23.350 |
|
opinion, so to me the limitation |
|
|
|
00:55:23.350 --> 00:55:25.340 |
|
there's two major limitations of Deep |
|
|
|
00:55:25.340 --> 00:55:25.690 |
|
Learning. |
|
|
|
00:55:26.470 --> 00:55:28.060 |
|
One is that the Networks. |
|
|
|
00:55:28.060 --> 00:55:30.535 |
|
There's only one kind of
|
|
|
00:55:30.535 --> 00:55:31.460 |
|
network structure. |
|
|
|
00:55:31.460 --> 00:55:33.450 |
|
All the information is encoded within |
|
|
|
00:55:33.450 --> 00:55:34.440 |
|
the weights of the network. |
|
|
|
00:55:35.330 --> 00:55:38.270 |
|
For humans, for example, we actually |
|
|
|
00:55:38.270 --> 00:55:39.340 |
|
have different kinds of memory |
|
|
|
00:55:39.340 --> 00:55:40.070 |
|
structures. |
|
|
|
00:55:40.070 --> 00:55:42.440 |
|
We have like the ability to remember |
|
|
|
00:55:42.440 --> 00:55:43.245 |
|
independent facts. |
|
|
|
00:55:43.245 --> 00:55:45.300 |
|
We also have our implicit memory, which |
|
|
|
00:55:45.300 --> 00:55:46.659 |
|
guides our action and like is |
|
|
|
00:55:46.660 --> 00:55:49.400 |
|
immediately like kind of like |
|
|
|
00:55:49.400 --> 00:55:51.260 |
|
accumulates a lot of information. |
|
|
|
00:55:51.260 --> 00:55:53.550 |
|
We have muscle memory, which is based |
|
|
|
00:55:53.550 --> 00:55:55.180 |
|
on repetition, like reinforcement |
|
|
|
00:55:55.180 --> 00:55:55.730 |
|
learning. |
|
|
|
00:55:55.730 --> 00:55:57.930 |
|
And that muscle memory, like never goes |
|
|
|
00:55:57.930 --> 00:55:58.470 |
|
away. |
|
|
|
00:55:58.470 --> 00:56:00.110 |
|
It's retained for like 20 years. |
|
|
|
00:56:00.110 --> 00:56:01.760 |
|
So we have many different memory |
|
|
|
00:56:01.760 --> 00:56:04.350 |
|
systems in our bodies and brains. |
|
|
|
00:56:05.070 --> 00:56:07.530 |
|
But the memory systems used by Deep |
|
|
|
00:56:07.530 --> 00:56:09.170 |
|
Learning are homogeneous. |
|
|
|
00:56:09.170 --> 00:56:10.720 |
|
So I think like figuring out how do we |
|
|
|
00:56:10.720 --> 00:56:12.713 |
|
create more heterogeneous memory |
|
|
|
00:56:12.713 --> 00:56:14.950 |
|
systems that can have different |
|
|
|
00:56:14.950 --> 00:56:16.970 |
|
advantages, but work together to solve |
|
|
|
00:56:16.970 --> 00:56:18.740 |
|
tasks is one thing. |
|
|
|
00:56:19.620 --> 00:56:22.360 |
|
Another is that the systems are still |
|
|
|
00:56:22.360 --> 00:56:23.830 |
|
essentially pattern recognition. |
|
|
|
00:56:23.830 --> 00:56:25.310 |
|
So you have what's called sequence to
|
|
|
00:56:25.310 --> 00:56:27.380 |
|
sequence Networks for example, where |
|
|
|
00:56:27.380 --> 00:56:29.411 |
|
like text comes in, text goes out or |
|
|
|
00:56:29.411 --> 00:56:31.469 |
|
an Image comes in
|
|
|
00:56:31.470 --> 00:56:33.059 |
|
and text goes out, or text comes in and an Image comes out.
|
|
|
00:56:33.970 --> 00:56:35.330 |
|
But they're like one shot. |
|
|
|
00:56:36.020 --> 00:56:37.543 |
|
Or like a lot of things that we do, if |
|
|
|
00:56:37.543 --> 00:56:39.625 |
|
you're writing, if you're going to |
|
|
|
00:56:39.625 --> 00:56:40.750 |
|
like, I don't know, order a plane |
|
|
|
00:56:40.750 --> 00:56:42.017 |
|
ticket or something, there's a bunch of |
|
|
|
00:56:42.017 --> 00:56:43.425 |
|
steps that you go through. |
|
|
|
00:56:43.425 --> 00:56:46.410 |
|
And so you make a plan, you execute |
|
|
|
00:56:46.410 --> 00:56:48.100 |
|
that plan, and each of those steps |
|
|
|
00:56:48.100 --> 00:56:49.550 |
|
involves some pattern recognition and |
|
|
|
00:56:49.550 --> 00:56:50.140 |
|
various things. |
|
|
|
00:56:50.740 --> 00:56:52.720 |
|
So there's a lot of compositionality to |
|
|
|
00:56:52.720 --> 00:56:54.770 |
|
the kinds of problems that we solve |
|
|
|
00:56:54.770 --> 00:56:55.310 |
|
day-to-day. |
|
|
|
00:56:55.930 --> 00:56:58.635 |
|
And that compositionality
|
|
|
00:56:58.635 --> 00:57:00.590 |
|
is only handled to a very limited |
|
|
|
00:57:00.590 --> 00:57:03.060 |
|
extent by these Networks by
|
|
|
00:57:03.060 --> 00:57:03.600 |
|
themselves. |
|
|
|
00:57:03.600 --> 00:57:05.980 |
|
So I think also better ways to form |
|
|
|
00:57:05.980 --> 00:57:07.570 |
|
plans to execute. |
|
|
|
00:57:08.430 --> 00:57:11.420 |
|
In terms of different steps and to make |
|
|
|
00:57:11.420 --> 00:57:14.420 |
|
large problems more modular is also |
|
|
|
00:57:14.420 --> 00:57:14.760 |
|
important. |
|
|
|
00:57:20.090 --> 00:57:20.420 |
|
OK. |
|
|
|
00:57:21.760 --> 00:57:22.782 |
|
So, all right. |
|
|
|
00:57:22.782 --> 00:57:23.392 |
|
So I'm going to. |
|
|
|
00:57:23.392 --> 00:57:24.920 |
|
I'm going to keep going because I want |
|
|
|
00:57:24.920 --> 00:57:25.950 |
|
to. |
|
|
|
00:57:26.400 --> 00:57:27.260 |
|
Because I want to. |
|
|
|
00:57:29.290 --> 00:57:32.500 |
|
So the next part is optimization, so. |
|
|
|
00:57:33.470 --> 00:57:34.720 |
|
The. |
|
|
|
00:57:36.100 --> 00:57:39.124 |
|
So we talked previously about SGD and |
|
|
|
00:57:39.124 --> 00:57:40.910 |
|
the optimization approaches are just |
|
|
|
00:57:40.910 --> 00:57:42.767 |
|
like extensions of SGD. |
|
|
|
00:57:42.767 --> 00:57:45.610 |
|
And these really cool illustrations or |
|
|
|
00:57:45.610 --> 00:57:47.370 |
|
I think they're cool helpful |
|
|
|
00:57:47.370 --> 00:57:49.630 |
|
illustrations are from this data |
|
|
|
00:57:49.630 --> 00:57:51.880 |
|
science site, where somebody really
|
|
|
00:57:51.880 --> 00:57:53.620 |
|
nicely explains like the different |
|
|
|
00:57:53.620 --> 00:57:55.340 |
|
optimization methods and. |
|
|
|
00:57:56.180 --> 00:57:57.760 |
|
And provides these illustrations. |
|
|
|
00:57:59.690 --> 00:58:00.440 |
|
So. |
|
|
|
00:58:00.590 --> 00:58:01.240 |
|
|
|
|
|
00:58:02.060 --> 00:58:05.090 |
|
They so these different. |
|
|
|
00:58:05.090 --> 00:58:07.710 |
|
All of these are like stochastic |
|
|
|
00:58:07.710 --> 00:58:09.660 |
|
gradient descent, so I don't need to |
|
|
|
00:58:09.660 --> 00:58:10.650 |
|
talk about the algorithm. |
|
|
|
00:58:10.650 --> 00:58:12.607 |
|
They're all based on computing some |
|
|
|
00:58:12.607 --> 00:58:14.900 |
|
Gradient of the loss with respect to |
|
|
|
00:58:14.900 --> 00:58:15.500 |
|
your weights. |
|
|
|
00:58:16.180 --> 00:58:18.170 |
|
And then they just differ in how you |
|
|
|
00:58:18.170 --> 00:58:19.380 |
|
update the weights given that |
|
|
|
00:58:19.380 --> 00:58:20.030 |
|
information. |
|
|
|
00:58:21.070 --> 00:58:23.250 |
|
So this is basic SGD, which we talked |
|
|
|
00:58:23.250 --> 00:58:25.660 |
|
about, with the Gradient
|
|
|
00:58:25.660 --> 00:58:27.020 |
|
of your loss with respect to the |
|
|
|
00:58:27.020 --> 00:58:27.540 |
|
weights. |
|
|
|
00:58:27.540 --> 00:58:29.346 |
|
You multiply it by negative eta,
|
|
|
00:58:29.346 --> 00:58:31.170 |
|
the learning rate, and
|
|
|
00:58:31.170 --> 00:58:32.450 |
|
then you add it to your previous weight |
|
|
|
00:58:32.450 --> 00:58:32.750 |
|
values. |
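As a sketch, the update is w = w - eta * g, where g is the gradient of the loss with respect to the weights; here it is on a toy quadratic loss.

```python
import numpy as np

def sgd_step(w, grad, eta=0.1):
    """Basic SGD: move the weights a small step against the gradient."""
    return w - eta * grad

# Toy example: minimize the loss 0.5 * ||w||^2, whose gradient is just w.
w = np.array([1.0, -2.0])
for _ in range(10):
    g = w                      # gradient of the loss at the current weights
    w = sgd_step(w, g)         # w <- w - eta * g
print(w)                       # the weights shrink toward the minimum at 0
```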
|
|
|
00:58:34.010 --> 00:58:35.460 |
|
And this is a nice illustration of |
|
|
|
00:58:35.460 --> 00:58:35.610 |
|
like. |
|
|
|
00:58:36.400 --> 00:58:37.970 |
|
Compute the gradient with respect to |
|
|
|
00:58:37.970 --> 00:58:39.660 |
|
each weight, and then you step in both |
|
|
|
00:58:39.660 --> 00:58:40.880 |
|
those directions, right? |
|
|
|
00:58:43.110 --> 00:58:43.300 |
|
Right. |
|
|
|
00:58:43.300 --> 00:58:45.850 |
|
The next step is Momentum. |
|
|
|
00:58:45.850 --> 00:58:47.914 |
|
So Momentum is what's letting this ball |
|
|
|
00:58:47.914 --> 00:58:49.010 |
|
roll up the hill. |
|
|
|
00:58:49.010 --> 00:58:51.667 |
|
If you just have SGD, then you can roll |
|
|
|
00:58:51.667 --> 00:58:53.120 |
|
down the hill, but you'll never like |
|
|
|
00:58:53.120 --> 00:58:54.494 |
|
really roll up it again because you |
|
|
|
00:58:54.494 --> 00:58:56.229 |
|
don't have any Momentum, because the |
|
|
|
00:58:56.230 --> 00:58:56.981 |
|
Gradient is up. |
|
|
|
00:58:56.981 --> 00:58:58.660 |
|
You don't, you don't go up, you only go |
|
|
|
00:58:58.660 --> 00:58:58.820 |
|
down. |
|
|
|
00:59:00.710 --> 00:59:05.360 |
|
Momentum is important because in these |
|
|
|
00:59:05.360 --> 00:59:08.010 |
|
Multi layer Networks you don't just |
|
|
|
00:59:08.010 --> 00:59:11.000 |
|
have like one good low solution, a big |
|
|
|
00:59:11.000 --> 00:59:12.823 |
|
bowl, you have like lots of pockets in |
|
|
|
00:59:12.823 --> 00:59:14.780 |
|
the bowl so that the solution space |
|
|
|
00:59:14.780 --> 00:59:16.483 |
|
looks more like an egg carton than a |
|
|
|
00:59:16.483 --> 00:59:16.669 |
|
bowl. |
|
|
|
00:59:16.670 --> 00:59:18.230 |
|
There's like lots of little pits. |
|
|
|
00:59:19.120 --> 00:59:20.375 |
|
So you want to be able to roll through |
|
|
|
00:59:20.375 --> 00:59:21.750 |
|
the little pits and get into the big |
|
|
|
00:59:21.750 --> 00:59:21.990 |
|
pits? |
|
|
|
00:59:23.390 --> 00:59:24.766 |
|
I guess shown here.
|
|
|
00:59:24.766 --> 00:59:28.355 |
|
So here the purple ball has Momentum |
|
|
|
00:59:28.355 --> 00:59:30.300 |
|
and the blue ball does not
|
|
|
00:59:30.300 --> 00:59:30.711 |
|
have Momentum. |
|
|
|
00:59:30.711 --> 00:59:32.600 |
|
So the blue ball as soon as it rolls |
|
|
|
00:59:32.600 --> 00:59:34.070 |
|
into like a little dip, it gets stuck |
|
|
|
00:59:34.070 --> 00:59:34.250 |
|
there. |
|
|
|
00:59:35.810 --> 00:59:38.010 |
|
Momentum is pretty simple to calculate, |
|
|
|
00:59:38.010 --> 00:59:40.163 |
|
one way to calculate it is
|
|
|
00:59:40.163 --> 00:59:43.360 |
|
just your Gradient plus, like,
|
|
|
00:59:43.360 --> 00:59:45.510 |
|
0.9 times the last Gradient.
|
|
|
00:59:45.510 --> 00:59:46.800 |
|
So that way, like the previous |
|
|
|
00:59:46.800 --> 00:59:47.990 |
|
Gradient, you keep moving in that |
|
|
|
00:59:47.990 --> 00:59:48.750 |
|
direction a little bit. |
|
|
|
00:59:49.560 --> 00:59:51.310 |
|
This is another way to represent it, |
|
|
|
00:59:51.310 --> 00:59:52.690 |
|
where we represent this Momentum |
|
|
|
00:59:52.690 --> 00:59:55.750 |
|
variable m of w at step t, which is beta times
|
|
|
00:59:55.750 --> 00:59:57.590 |
|
the last value, where beta would be for
|
|
|
00:59:57.590 --> 00:59:59.940 |
|
example 0.9, plus the current Gradient.
|
|
|
01:00:01.120 --> 01:00:02.603 |
|
So you just keep moving. |
|
|
|
01:00:02.603 --> 01:00:04.060 |
|
You prefer to keep moving in the |
|
|
|
01:00:04.060 --> 01:00:04.760 |
|
current direction. |
|
|
|
01:00:05.560 --> 01:00:08.150 |
|
Even if you call SGD and you do
|
|
|
01:00:08.150 --> 01:00:10.240 |
|
not mention Momentum to PyTorch, by
|
|
|
01:00:10.240 --> 01:00:12.050 |
|
default it will use Momentum because
|
|
|
01:00:12.050 --> 01:00:12.800 |
|
it's pretty important. |
|
|
|
01:00:13.440 --> 01:00:15.340 |
|
And I think the default parameter is .9 |
|
|
|
01:00:15.340 --> 01:00:15.690 |
|
for beta. |
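A sketch of that momentum update (m = beta * m + g, then w = w - eta * m, with beta around 0.9), plus the corresponding PyTorch call; note that torch.optim.SGD takes momentum as an explicit argument, and 0.9 is the value people commonly pass.

```python
import numpy as np
import torch
import torch.nn as nn

def momentum_step(w, grad, m, eta=0.1, beta=0.9):
    """Momentum SGD: keep moving along a running average of past gradients."""
    m = beta * m + grad
    return w - eta * m, m

w, m = np.array([1.0, -2.0]), np.zeros(2)
for _ in range(5):
    g = w                            # gradient of the toy loss 0.5 * ||w||^2
    w, m = momentum_step(w, g, m)
print(w)

# The same idea through PyTorch's optimizer, with the usual beta of 0.9.
model = nn.Linear(4, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
loss = model(torch.rand(8, 4)).pow(2).mean()
loss.backward()
opt.step()                           # applies the momentum update to the weights
```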
|
|
|
01:00:18.890 --> 01:00:19.280 |
|
Question. |
|
|
|
01:00:25.810 --> 01:00:27.880 |
|
It cannot go up. |
|
|
|
01:00:27.880 --> 01:00:30.520 |
|
So with vanilla SGD, you're always
|
|
|
01:00:30.520 --> 01:00:31.330 |
|
trying to go down. |
|
|
|
01:00:32.040 --> 01:00:33.890 |
|
So you get into a little hole, you go |
|
|
|
01:00:33.890 --> 01:00:35.020 |
|
down into the little hole, and you |
|
|
|
01:00:35.020 --> 01:00:35.930 |
|
can't get back out of it. |
|
|
|
01:00:36.610 --> 01:00:38.330 |
|
But Momentum, if it's a little hole and |
|
|
|
01:00:38.330 --> 01:00:39.970 |
|
you've been rolling fast, you roll up |
|
|
|
01:00:39.970 --> 01:00:41.630 |
|
out of it and you can get into other |
|
|
|
01:00:41.630 --> 01:00:42.040 |
|
ones. |
|
|
|
01:00:42.040 --> 01:00:42.840 |
|
Question. |
|
|
|
01:00:56.070 --> 01:00:57.210 |
|
That's a good question. |
|
|
|
01:00:57.210 --> 01:00:58.820 |
|
So I think the question is like, could |
|
|
|
01:00:58.820 --> 01:01:00.640 |
|
you end up getting into a better |
|
|
|
01:01:00.640 --> 01:01:02.560 |
|
solution and rolling out of it and then |
|
|
|
01:01:02.560 --> 01:01:03.780 |
|
ending up in a worse one? |
|
|
|
01:01:05.100 --> 01:01:05.990 |
|
That can happen. |
|
|
|
01:01:06.860 --> 01:01:07.950 |
|
It's. |
|
|
|
01:01:07.950 --> 01:01:09.360 |
|
I guess it's less likely though, |
|
|
|
01:01:09.360 --> 01:01:10.940 |
|
because the larger holes usually have |
|
|
|
01:01:10.940 --> 01:01:13.180 |
|
like bigger basins too, but. |
|
|
|
01:01:13.300 --> 01:01:17.920 |
|
One thing people do, it's partially for |
|
|
|
01:01:17.920 --> 01:01:20.230 |
|
that but more for overfitting,
|
|
|
01:01:20.230 --> 01:01:21.950 |
|
is that you often see checkpoints. |
|
|
|
01:01:21.950 --> 01:01:23.490 |
|
So you might save your Model at various |
|
|
|
01:01:23.490 --> 01:01:25.662 |
|
points and at the end Choose the model |
|
|
|
01:01:25.662 --> 01:01:28.440 |
|
that had the lowest validation loss, |
|
|
|
01:01:28.440 --> 01:01:30.160 |
|
or the lowest validation error.
|
|
|
01:01:31.320 --> 01:01:33.230 |
|
So that even if you were to further |
|
|
|
01:01:33.230 --> 01:01:34.930 |
|
optimize into a bad solution, you can |
|
|
|
01:01:34.930 --> 01:01:35.640 |
|
go back. |
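A minimal sketch of that checkpointing habit (the model, data, and validation function here are just placeholders): save the weights whenever the validation loss improves, and load the best ones back at the end.

```python
import copy
import torch
import torch.nn as nn

def validation_loss(model):
    # Placeholder: in practice this evaluates the model on a held-out validation set.
    with torch.no_grad():
        return model(torch.rand(16, 4)).pow(2).mean().item()

model = nn.Linear(4, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

best_loss, best_state = float("inf"), None
for epoch in range(20):
    opt.zero_grad()
    loss = model(torch.rand(32, 4)).pow(2).mean()   # stand-in for a training step
    loss.backward()
    opt.step()

    val = validation_loss(model)
    if val < best_loss:                             # keep the best checkpoint so far
        best_loss = val
        best_state = copy.deepcopy(model.state_dict())

model.load_state_dict(best_state)    # roll back to the best model at the end
print(best_loss)
```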
|
|
|
01:01:35.640 --> 01:01:37.250 |
|
There's also like more complex |
|
|
|
01:01:37.250 --> 01:01:39.940 |
|
algorithms, I forget what it's |
|
|
|
01:01:39.940 --> 01:01:41.300 |
|
called now, where you go back and |
|
|
|
01:01:41.300 --> 01:01:43.770 |
|
forth, so you take really |
|
|
|
01:01:43.770 --> 01:01:45.270 |
|
aggressive steps and then you back |
|
|
|
01:01:45.270 --> 01:01:47.436 |
|
track if you need to, and then you take |
|
|
|
01:01:47.436 --> 01:01:48.909 |
|
like more aggressive steps and back |
|
|
|
01:01:48.909 --> 01:01:50.610 |
|
track; it's look-ahead and something |
|
|
|
01:01:50.610 --> 01:01:50.750 |
|
else. |
|
|
|
01:01:53.020 --> 01:01:54.730 |
|
So there's like more complex algorithms |
|
|
|
01:01:54.730 --> 01:01:55.680 |
|
that try to deal with that. |
|
|
|
01:01:58.700 --> 01:02:01.270 |
|
So the other thing by the way that |
|
|
|
01:02:01.270 --> 01:02:03.705 |
|
helps with this is the Stochastic part |
|
|
|
01:02:03.705 --> 01:02:04.550 |
|
of SGD. |
|
|
|
01:02:04.550 --> 01:02:07.000 |
|
Different little samples of data will |
|
|
|
01:02:07.000 --> 01:02:08.300 |
|
actually have different Gradients. |
|
|
|
01:02:08.300 --> 01:02:10.370 |
|
So what might be a pit for one data |
|
|
|
01:02:10.370 --> 01:02:12.160 |
|
sample is not a pit for another data |
|
|
|
01:02:12.160 --> 01:02:12.460 |
|
sample. |
|
|
|
01:02:13.120 --> 01:02:15.620 |
|
And so that can help you get out of |
|
|
|
01:02:15.620 --> 01:02:19.390 |
|
little pits and help with the optimization |
|
|
|
01:02:19.390 --> 01:02:19.910 |
|
that way too. |
|
|
|
01:02:22.830 --> 01:02:24.050 |
|
Alright, so there's another thing. |
|
|
|
01:02:24.050 --> 01:02:25.865 |
|
Now we're not doing Momentum anymore. |
|
|
|
01:02:25.865 --> 01:02:29.060 |
|
We're just trying to regularize our |
|
|
|
01:02:29.060 --> 01:02:30.060 |
|
Descent. |
|
|
|
01:02:30.170 --> 01:02:30.680 |
|
|
|
|
|
01:02:31.330 --> 01:02:34.863 |
|
So the intuition behind this is that in |
|
|
|
01:02:34.863 --> 01:02:37.609 |
|
some cases, some |
|
|
|
01:02:37.610 --> 01:02:39.230 |
|
weights might not be initialized very |
|
|
|
01:02:39.230 --> 01:02:39.520 |
|
well. |
|
|
|
01:02:40.240 --> 01:02:42.027 |
|
And so they're not really like |
|
|
|
01:02:42.027 --> 01:02:44.343 |
|
contributing to the Output very much. |
|
|
|
01:02:44.343 --> 01:02:46.039 |
|
And as a result they don't get |
|
|
|
01:02:46.040 --> 01:02:47.882 |
|
optimized much because they're not |
|
|
|
01:02:47.882 --> 01:02:48.168 |
|
contributing. |
|
|
|
01:02:48.168 --> 01:02:50.145 |
|
So they basically don't |
|
|
|
01:02:50.145 --> 01:02:51.360 |
|
get touched, they get left alone. |
|
|
|
01:02:52.350 --> 01:02:54.840 |
|
The idea of AdaGrad is that you want |
|
|
|
01:02:54.840 --> 01:02:57.410 |
|
to allow each of the |
|
|
|
01:02:57.410 --> 01:02:59.960 |
|
weights to be optimized and so. |
|
|
|
01:03:00.590 --> 01:03:02.920 |
|
You keep track of the total path length |
|
|
|
01:03:02.920 --> 01:03:03.649 |
|
of those weights. |
|
|
|
01:03:03.650 --> 01:03:05.399 |
|
So how have the weights changed |
|
|
|
01:03:05.399 --> 01:03:05.694 |
|
over time? |
|
|
|
01:03:05.694 --> 01:03:08.117 |
|
And if the weights have changed a lot |
|
|
|
01:03:08.117 --> 01:03:10.830 |
|
over time, then you reduce how much |
|
|
|
01:03:10.830 --> 01:03:12.120 |
|
you're going to move those particular |
|
|
|
01:03:12.120 --> 01:03:14.080 |
|
weights, and if they haven't changed |
|
|
|
01:03:14.080 --> 01:03:16.500 |
|
very much over time, then you allow |
|
|
|
01:03:16.500 --> 01:03:17.730 |
|
those weights to move more. |
|
|
|
01:03:18.750 --> 01:03:20.310 |
|
So in terms of the math. |
|
|
|
01:03:21.220 --> 01:03:23.230 |
|
You keep track of this magnitude, |
|
|
|
01:03:23.230 --> 01:03:25.627 |
|
which is the path length, so it's just like |
|
|
|
01:03:25.627 --> 01:03:26.910 |
|
the length of these curves. |
|
|
|
01:03:27.820 --> 01:03:29.190 |
|
During the optimization. |
|
|
|
01:03:29.870 --> 01:03:31.470 |
|
And that's just the sum of squared |
|
|
|
01:03:31.470 --> 01:03:34.050 |
|
values of the Gradients square rooted. |
|
|
|
01:03:34.050 --> 01:03:36.316 |
|
So it's the Euclidean norm of your |
|
|
|
01:03:36.316 --> 01:03:39.249 |
|
accumulated |
|
|
|
01:03:39.250 --> 01:03:39.960 |
|
weight Gradients. |
|
|
|
01:03:41.520 --> 01:03:44.600 |
|
And then you normalize by that when |
|
|
|
01:03:44.600 --> 01:03:45.700 |
|
you're computing your Update. |
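|
NOTE |
A rough sketch of an AdaGrad-style update as described above, assuming PyTorch tensors; the lr and eps values are illustrative: |
|
import torch

def adagrad_step(w, g, g_sq_sum, lr=0.01, eps=1e-8):
    # Accumulate squared gradients per weight (a running measure of how much
    # each weight has been moving over time).
    g_sq_sum.add_(g * g)
    # Weights with a long history of large gradients take smaller steps;
    # weights that have barely moved are allowed to move more.
    w.sub_(lr * g / (g_sq_sum.sqrt() + eps))
    return w, g_sq_sum
|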
|
|
|
01:03:46.390 --> 01:03:48.500 |
|
And so in this case, for example, if |
|
|
|
01:03:48.500 --> 01:03:50.390 |
|
you don't do this, you get the cyan ball |
|
|
|
01:03:50.390 --> 01:03:51.960 |
|
that rolls down in one direction, that's |
|
|
|
01:03:51.960 --> 01:03:53.666 |
|
the fastest direction, and then rolls |
|
|
|
01:03:53.666 --> 01:03:54.610 |
|
in the other direction. |
|
|
|
01:03:55.420 --> 01:03:57.580 |
|
And if you do it, you get a more direct |
|
|
|
01:03:57.580 --> 01:03:59.900 |
|
path to the final solution with the |
|
|
|
01:03:59.900 --> 01:04:00.390 |
|
white ball. |
|
|
|
01:04:04.430 --> 01:04:05.430 |
|
And then one. |
|
|
|
01:04:06.210 --> 01:04:08.436 |
|
The problem with that approach is that |
|
|
|
01:04:08.436 --> 01:04:10.210 |
|
your path lengths keep getting longer |
|
|
|
01:04:10.210 --> 01:04:12.331 |
|
and so your steps keep getting smaller |
|
|
|
01:04:12.331 --> 01:04:14.040 |
|
and smaller, and so it can take a |
|
|
|
01:04:14.040 --> 01:04:15.600 |
|
really long time to converge. |
|
|
|
01:04:15.600 --> 01:04:18.300 |
|
So RMSProp tries to deal with that: root |
|
|
|
01:04:18.300 --> 01:04:19.370 |
|
mean squared propagation. |
|
|
|
01:04:19.990 --> 01:04:21.450 |
|
Instead of doing it based on the |
|
|
|
01:04:21.450 --> 01:04:23.376 |
|
total path length, it's based on a |
|
|
|
01:04:23.376 --> 01:04:25.020 |
|
moving average of the path length. |
|
|
|
01:04:25.020 --> 01:04:26.879 |
|
One way to do a moving average |
|
|
|
01:04:27.570 --> 01:04:29.390 |
|
is that you take the last value and |
|
|
|
01:04:29.390 --> 01:04:31.340 |
|
multiply it by epsilon and then you do |
|
|
|
01:04:31.340 --> 01:04:33.370 |
|
1 minus epsilon times the new value. |
|
|
|
01:04:33.370 --> 01:04:36.273 |
|
So if this is like 0.999, if epsilon is |
|
|
|
01:04:36.273 --> 01:04:38.970 |
|
0.999, then it will mostly reflect like |
|
|
|
01:04:38.970 --> 01:04:41.040 |
|
the recent observations of the Squared |
|
|
|
01:04:41.040 --> 01:04:41.410 |
|
value. |
|
|
|
01:04:42.590 --> 01:04:43.750 |
|
A moving average. |
|
|
|
01:04:44.360 --> 01:04:45.980 |
|
And then otherwise the normalization |
|
|
|
01:04:45.980 --> 01:04:46.500 |
|
is the same. |
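|
NOTE |
A rough sketch of the RMSProp-style moving average described above, assuming PyTorch tensors; the lecture calls the decay factor epsilon, written here as decay, and the values are illustrative: |
|
import torch

def rmsprop_step(w, g, avg_sq, lr=0.001, decay=0.999, eps=1e-8):
    # Moving average of squared gradients instead of the total path length,
    # so old history fades and the step size does not shrink forever.
    avg_sq.mul_(decay).add_((1 - decay) * g * g)
    # Otherwise the normalization is the same as before.
    w.sub_(lr * g / (avg_sq.sqrt() + eps))
    return w, avg_sq
|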
|
|
|
01:04:47.670 --> 01:04:49.620 |
|
Here the green ball, which is |
|
|
|
01:04:49.620 --> 01:04:51.520 |
|
RMSProp, moves faster than the white ball. |
|
|
|
01:04:52.870 --> 01:04:55.170 |
|
And finally, we get to Adam, which is |
|
|
|
01:04:55.170 --> 01:04:57.610 |
|
the most commonly used. Just Vanilla |
|
|
|
01:04:57.610 --> 01:04:58.110 |
|
SGD |
|
|
|
01:04:58.110 --> 01:05:00.049 |
|
plus Momentum is also commonly used, |
|
|
|
01:05:00.050 --> 01:05:01.430 |
|
especially by people that have really |
|
|
|
01:05:01.430 --> 01:05:01.990 |
|
big computers. |
|
|
|
01:05:02.790 --> 01:05:05.590 |
|
But Adam is most commonly used if |
|
|
|
01:05:05.590 --> 01:05:07.200 |
|
you don't want to have to like mess too |
|
|
|
01:05:07.200 --> 01:05:09.394 |
|
much with your learning rate |
|
|
|
01:05:09.394 --> 01:05:10.740 |
|
and other parameters. |
|
|
|
01:05:10.740 --> 01:05:11.730 |
|
It's pretty robust. |
|
|
|
01:05:12.500 --> 01:05:16.860 |
|
So Adam is combining Momentum, so it's |
|
|
|
01:05:16.860 --> 01:05:18.260 |
|
got this Momentum term. |
|
|
|
01:05:19.120 --> 01:05:22.570 |
|
And also this RMSProp normalization |
|
|
|
01:05:22.570 --> 01:05:22.930 |
|
term. |
|
|
|
01:05:23.880 --> 01:05:26.590 |
|
And so it's kind of like regularizing |
|
|
|
01:05:26.590 --> 01:05:28.320 |
|
the directions that you move to try to |
|
|
|
01:05:28.320 --> 01:05:29.510 |
|
make sure that you're like paying |
|
|
|
01:05:29.510 --> 01:05:30.510 |
|
attention to all the weights. |
|
|
|
01:05:31.190 --> 01:05:33.312 |
|
And it also incorporates some |
|
|
|
01:05:33.312 --> 01:05:33.664 |
|
momentum. |
|
|
|
01:05:33.664 --> 01:05:35.600 |
|
So the Momentum, not only does it get |
|
|
|
01:05:35.600 --> 01:05:37.140 |
|
you out of local minima, but it can |
|
|
|
01:05:37.140 --> 01:05:38.040 |
|
accelerate you. |
|
|
|
01:05:38.040 --> 01:05:39.970 |
|
So if you keep moving in the same |
|
|
|
01:05:39.970 --> 01:05:41.338 |
|
direction, you'll start moving faster |
|
|
|
01:05:41.338 --> 01:05:42.389 |
|
and faster and faster. |
|
|
|
01:05:43.330 --> 01:05:45.870 |
|
So these two things in combination are |
|
|
|
01:05:45.870 --> 01:05:48.770 |
|
helpful because the Momentum helps you |
|
|
|
01:05:48.770 --> 01:05:50.680 |
|
accelerate when you should be moving |
|
|
|
01:05:50.680 --> 01:05:51.340 |
|
faster. |
|
|
|
01:05:52.110 --> 01:05:55.750 |
|
And the regularization of this RMSProp |
|
|
|
01:05:55.750 --> 01:05:57.180 |
|
helps make sure that things don't get |
|
|
|
01:05:57.180 --> 01:05:58.100 |
|
too out of control. |
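|
NOTE |
A rough sketch combining the two pieces above in the spirit of Adam, assuming PyTorch tensors; the bias-correction terms of the full Adam algorithm are omitted here for brevity: |
|
import torch

def adam_like_step(w, g, m, v, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    # Momentum term: keep moving in the recent gradient direction (acceleration).
    m.mul_(beta1).add_((1 - beta1) * g)
    # RMSProp-style term: moving average of squared gradients per weight.
    v.mul_(beta2).add_((1 - beta2) * g * g)
    # Normalized update: momentum accelerates, sqrt(v) keeps it from blowing up.
    w.sub_(lr * m / (v.sqrt() + eps))
    return w, m, v

# In practice: optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
|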
|
|
|
01:05:58.100 --> 01:05:58.760 |
|
So if you're, like, |
|
|
|
01:05:59.470 --> 01:06:00.785 |
|
really accelerating, |
|
|
|
01:06:00.785 --> 01:06:03.480 |
|
you don't fly off into NaN land. |
|
|
|
01:06:03.480 --> 01:06:06.720 |
|
You get normalized by your gradient magnitude before |
|
|
|
01:06:06.720 --> 01:06:07.430 |
|
you. |
|
|
|
01:06:07.600 --> 01:06:07.770 |
|
OK. |
|
|
|
01:06:08.390 --> 01:06:10.320 |
|
Before it gets like too crazy. |
|
|
|
01:06:11.520 --> 01:06:13.300 |
|
Otherwise you can imagine like with the |
|
|
|
01:06:13.300 --> 01:06:14.610 |
|
bowl you can be like. |
|
|
|
01:06:15.700 --> 01:06:17.820 |
|
And you're like fly off into like |
|
|
|
01:06:17.820 --> 01:06:18.490 |
|
Infinity. |
|
|
|
01:06:21.650 --> 01:06:23.430 |
|
And if you ever start seeing NaNs in |
|
|
|
01:06:23.430 --> 01:06:24.680 |
|
your losses, that's probably what |
|
|
|
01:06:24.680 --> 01:06:24.960 |
|
happened. |
|
|
|
01:06:26.260 --> 01:06:26.770 |
|
|
|
|
|
01:06:27.690 --> 01:06:29.430 |
|
So there's some cool videos here. |
|
|
|
01:06:31.850 --> 01:06:34.910 |
|
So just showing like some races of |
|
|
|
01:06:34.910 --> 01:06:37.470 |
|
these different approaches and. |
|
|
|
01:06:40.290 --> 01:06:41.900 |
|
So I think let's see. |
|
|
|
01:06:44.810 --> 01:06:46.160 |
|
So they were on YouTube, so. |
|
|
|
01:06:47.090 --> 01:06:48.350 |
|
More of a pain to grab them. |
|
|
|
01:06:48.350 --> 01:06:49.900 |
|
The other ones are GIFs, which is |
|
|
|
01:06:49.900 --> 01:06:50.210 |
|
nice. |
|
|
|
01:06:50.820 --> 01:06:53.430 |
|
That's just showing this is blue is. |
|
|
|
01:06:54.130 --> 01:06:55.770 |
|
Blue is. |
|
|
|
01:06:56.990 --> 01:06:57.680 |
|
Adam, yes. |
|
|
|
01:06:57.680 --> 01:06:58.030 |
|
Thank you. |
|
|
|
01:06:58.930 --> 01:07:00.750 |
|
So you can see that the blue is |
|
|
|
01:07:00.750 --> 01:07:02.020 |
|
actually able to find a better |
|
|
|
01:07:02.020 --> 01:07:04.060 |
|
solution, a lower point. |
|
|
|
01:07:04.060 --> 01:07:06.430 |
|
These are like loss manifolds, so if |
|
|
|
01:07:06.430 --> 01:07:08.445 |
|
you have like 2 weights, this is like |
|
|
|
01:07:08.445 --> 01:07:09.670 |
|
the loss as a function of those |
|
|
|
01:07:09.670 --> 01:07:09.930 |
|
weights. |
|
|
|
01:07:14.350 --> 01:07:15.850 |
|
So the optimization is trying to find |
|
|
|
01:07:15.850 --> 01:07:17.450 |
|
the weights that give you |
|
|
|
01:07:17.450 --> 01:07:18.160 |
|
the lowest loss. |
|
|
|
01:07:19.320 --> 01:07:20.200 |
|
Here's another example. |
|
|
|
01:07:20.200 --> 01:07:21.870 |
|
They all start at the same point so |
|
|
|
01:07:21.870 --> 01:07:23.090 |
|
that you can only see one ball, but |
|
|
|
01:07:23.090 --> 01:07:23.660 |
|
they're all there. |
|
|
|
01:07:26.580 --> 01:07:27.120 |
|
|
|
|
|
01:07:31.150 --> 01:07:33.400 |
|
The Momentum got there first, but both |
|
|
|
01:07:33.400 --> 01:07:35.600 |
|
Momentum and Adam got there at the end. |
|
|
|
01:07:35.600 --> 01:07:36.840 |
|
The other ones would have gotten there |
|
|
|
01:07:36.840 --> 01:07:38.260 |
|
too because that was an easy case, but |
|
|
|
01:07:38.260 --> 01:07:39.110 |
|
they just take longer. |
|
|
|
01:07:40.840 --> 01:07:41.910 |
|
Yeah, so anyway. |
|
|
|
01:07:44.100 --> 01:07:46.170 |
|
Any questions about Momentum about? |
|
|
|
01:07:47.160 --> 01:07:48.530 |
|
SGD momentum, Adam. |
|
|
|
01:07:50.550 --> 01:07:53.043 |
|
So I would say typically I see people |
|
|
|
01:07:53.043 --> 01:07:54.990 |
|
use SGD or Adam. |
|
|
|
01:07:54.990 --> 01:07:58.323 |
|
And so in your homework we first say |
|
|
|
01:07:58.323 --> 01:07:59.009 |
|
use SGD. |
|
|
|
01:08:00.270 --> 01:08:01.570 |
|
Because it's the main one we taught. |
|
|
|
01:08:01.570 --> 01:08:03.090 |
|
But then when you try to like make it |
|
|
|
01:08:03.090 --> 01:08:04.920 |
|
better, I would probably switch to Adam |
|
|
|
01:08:04.920 --> 01:08:07.290 |
|
because it's |
|
|
|
01:08:07.290 --> 01:08:09.080 |
|
less sensitive to Learning rates and |
|
|
|
01:08:09.080 --> 01:08:11.910 |
|
it makes optimization a bit easier |
|
|
|
01:08:11.910 --> 01:08:13.190 |
|
for the Model designer. |
|
|
|
01:08:14.750 --> 01:08:16.360 |
|
All of that's handled for you. |
|
|
|
01:08:16.360 --> 01:08:18.150 |
|
All you have to do is change SGD to |
|
|
|
01:08:18.150 --> 01:08:18.560 |
|
Adam. |
|
|
|
01:08:18.560 --> 01:08:20.350 |
|
There's not a lot that you have to do |
|
|
|
01:08:20.350 --> 01:08:22.050 |
|
in terms of typing. |
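|
NOTE |
A sketch of the one-line swap being described, assuming PyTorch; the model and learning rates are placeholders: |
|
import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # placeholder model

# Before:
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
# After: same training loop, just a different optimizer class.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
|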
|
|
|
01:08:24.510 --> 01:08:25.430 |
|
All right, so. |
|
|
|
01:08:26.460 --> 01:08:27.250 |
|
Even with. |
|
|
|
01:08:28.820 --> 01:08:30.840 |
|
Even with ReLU and Adam optimization, |
|
|
|
01:08:30.840 --> 01:08:32.830 |
|
though, it was hard to get very Deep |
|
|
|
01:08:32.830 --> 01:08:34.840 |
|
Networks to work very well. |
|
|
|
01:08:35.840 --> 01:08:37.720 |
|
So there were Networks, this one going |
|
|
|
01:08:37.720 --> 01:08:39.690 |
|
deeper with convolutions where they |
|
|
|
01:08:39.690 --> 01:08:40.450 |
|
would. |
|
|
|
01:08:40.600 --> 01:08:42.130 |
|
They would. |
|
|
|
01:08:42.390 --> 01:08:44.860 |
|
And they would have losses at various |
|
|
|
01:08:44.860 --> 01:08:45.086 |
|
stages. |
|
|
|
01:08:45.086 --> 01:08:47.193 |
|
So you'd basically build |
|
|
|
01:08:47.193 --> 01:08:48.820 |
|
classifiers off of branches of the |
|
|
|
01:08:48.820 --> 01:08:49.215 |
|
network. |
|
|
|
01:08:49.215 --> 01:08:51.815 |
|
At layer five and seven and nine, you'd |
|
|
|
01:08:51.815 --> 01:08:53.609 |
|
have a whole bunch of classifiers so |
|
|
|
01:08:53.610 --> 01:08:55.100 |
|
that each of these can feed |
|
|
|
01:08:55.960 --> 01:08:58.389 |
|
Gradients into the earlier parts of the |
|
|
|
01:08:58.390 --> 01:09:00.465 |
|
network, because if you didn't do this |
|
|
|
01:09:00.465 --> 01:09:02.150 |
|
and you just had the Classification |
|
|
|
01:09:02.150 --> 01:09:04.620 |
|
here, you'd have this |
|
|
|
01:09:04.620 --> 01:09:06.676 |
|
vanishing gradient problem where like |
|
|
|
01:09:06.676 --> 01:09:10.410 |
|
the values chop off or kill some |
|
|
|
01:09:10.410 --> 01:09:12.470 |
|
of your Gradients and no Gradients are |
|
|
|
01:09:12.470 --> 01:09:13.630 |
|
getting back to the beginning, so |
|
|
|
01:09:13.630 --> 01:09:14.690 |
|
you're not able to optimize. |
|
|
|
01:09:15.760 --> 01:09:18.350 |
|
They do these really heavy solutions |
|
|
|
01:09:18.350 --> 01:09:19.440 |
|
where you train a whole bunch of |
|
|
|
01:09:19.440 --> 01:09:21.410 |
|
classifiers and each one is helping to |
|
|
|
01:09:21.410 --> 01:09:22.960 |
|
inform the previous layers. |
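|
NOTE |
A rough sketch of the "classifiers off of branches" idea (auxiliary losses at intermediate layers, GoogLeNet-style), assuming PyTorch; the feature/head names and the 0.3 weight are illustrative: |
|
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

def deep_supervision_loss(feat5, feat7, feat9, head5, head7, head9, targets):
    # Each intermediate feature gets its own small classifier (head), so
    # gradients reach the early layers directly instead of only through
    # the final output.
    main_loss = criterion(head9(feat9), targets)
    aux_loss = criterion(head5(feat5), targets) + criterion(head7(feat7), targets)
    return main_loss + 0.3 * aux_loss
|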
|
|
|
01:09:25.620 --> 01:09:27.710 |
|
Even with that, people are finding that |
|
|
|
01:09:27.710 --> 01:09:29.390 |
|
they were running out of steam, like |
|
|
|
01:09:29.390 --> 01:09:31.660 |
|
you couldn't build deeper, a lot bigger |
|
|
|
01:09:31.660 --> 01:09:31.930 |
|
Networks. |
|
|
|
01:09:31.930 --> 01:09:33.190 |
|
There were still |
|
|
|
01:09:33.190 --> 01:09:36.800 |
|
Improvements, VGG and GoogLeNet, |
|
|
|
01:09:36.800 --> 01:09:39.040 |
|
but they weren't able to get like |
|
|
|
01:09:39.040 --> 01:09:40.060 |
|
really Deep Networks. |
|
|
|
01:09:40.860 --> 01:09:43.014 |
|
And so it wasn't clear like, was the |
|
|
|
01:09:43.014 --> 01:09:44.660 |
|
problem that the Deep Networks were |
|
|
|
01:09:44.660 --> 01:09:46.020 |
|
overfitting the training data, they |
|
|
|
01:09:46.020 --> 01:09:47.676 |
|
were just too powerful or was the |
|
|
|
01:09:47.676 --> 01:09:49.716 |
|
problem that we couldn't just that we |
|
|
|
01:09:49.716 --> 01:09:51.850 |
|
just couldn't optimize them or some |
|
|
|
01:09:51.850 --> 01:09:52.470 |
|
combination? |
|
|
|
01:09:53.900 --> 01:09:56.910 |
|
So my question to you is, what is a way |
|
|
|
01:09:56.910 --> 01:09:58.630 |
|
that we could answer this question if |
|
|
|
01:09:58.630 --> 01:10:00.080 |
|
we don't know whether the Networks are |
|
|
|
01:10:00.080 --> 01:10:01.430 |
|
overfitting the training data? |
|
|
|
01:10:02.120 --> 01:10:04.130 |
|
Or whether we're just having problems |
|
|
|
01:10:04.130 --> 01:10:05.130 |
|
optimizing them. |
|
|
|
01:10:05.130 --> 01:10:06.040 |
|
In other words, they're like |
|
|
|
01:10:06.040 --> 01:10:07.380 |
|
essentially underfitting the training |
|
|
|
01:10:07.380 --> 01:10:07.570 |
|
data. |
|
|
|
01:10:08.360 --> 01:10:11.090 |
|
What would we do to diagnose that? |
|
|
|
01:10:26.640 --> 01:10:28.400 |
|
So we want to. |
|
|
|
01:10:28.400 --> 01:10:30.460 |
|
So the answer was compare the Training |
|
|
|
01:10:30.460 --> 01:10:31.680 |
|
error and the test error. |
|
|
|
01:10:31.680 --> 01:10:32.000 |
|
Yes. |
|
|
|
01:10:32.000 --> 01:10:33.930 |
|
So we basically want to look at |
|
|
|
01:10:33.930 --> 01:10:34.105 |
|
the. |
|
|
|
01:10:34.105 --> 01:10:35.480 |
|
We need to look at the training error |
|
|
|
01:10:35.480 --> 01:10:35.960 |
|
as well. |
|
|
|
01:10:36.880 --> 01:10:39.550 |
|
And so that's what He et al. did. |
|
|
|
01:10:40.170 --> 01:10:42.660 |
|
This is the Resnet paper, which has |
|
|
|
01:10:42.660 --> 01:10:44.980 |
|
been cited 150,000 times. |
|
|
|
01:10:46.020 --> 01:10:46.590 |
|
So. |
|
|
|
01:10:47.320 --> 01:10:49.668 |
|
They plot the Training error and they |
|
|
|
01:10:49.668 --> 01:10:52.090 |
|
plot the test error and they say, look, |
|
|
|
01:10:52.090 --> 01:10:53.910 |
|
you have a model that got bigger from |
|
|
|
01:10:53.910 --> 01:10:56.420 |
|
20 to 56 and the Training error went up |
|
|
|
01:10:56.420 --> 01:10:56.930 |
|
by a lot. |
|
|
|
01:10:57.890 --> 01:10:59.210 |
|
So that's pretty weird. |
|
|
|
01:10:59.210 --> 01:11:01.335 |
|
Like you have a bigger model, it has to |
|
|
|
01:11:01.335 --> 01:11:03.410 |
|
have less bias in like traditional |
|
|
|
01:11:03.410 --> 01:11:03.840 |
|
terms. |
|
|
|
01:11:04.460 --> 01:11:06.776 |
|
But we're getting higher error in |
|
|
|
01:11:06.776 --> 01:11:08.469 |
|
training, not just in test. |
|
|
|
01:11:08.470 --> 01:11:09.742 |
|
And if you have higher error in |
|
|
|
01:11:09.742 --> 01:11:11.300 |
|
Training, that also will mean that you |
|
|
|
01:11:11.300 --> 01:11:12.680 |
|
probably have higher error in test, |
|
|
|
01:11:12.680 --> 01:11:14.142 |
|
because the test error is the Training |
|
|
|
01:11:14.142 --> 01:11:16.060 |
|
error plus a generalization error. |
|
|
|
01:11:16.060 --> 01:11:17.192 |
|
So this is a test. |
|
|
|
01:11:17.192 --> 01:11:18.050 |
|
This is the train. |
|
|
|
01:11:19.610 --> 01:11:20.760 |
|
So they have like a couple |
|
|
|
01:11:20.760 --> 01:11:21.580 |
|
explanations. |
|
|
|
01:11:22.570 --> 01:11:24.670 |
|
One is the Vanishing Gradients problem. |
|
|
|
01:11:24.670 --> 01:11:27.440 |
|
So here is for example a VGG-19 |
|
|
|
01:11:28.190 --> 01:11:28.870 |
|
Network. |
|
|
|
01:11:28.870 --> 01:11:32.616 |
|
Here's a 34-layer network that is |
|
|
|
01:11:32.616 --> 01:11:34.980 |
|
full of convolutions |
|
|
|
01:11:34.980 --> 01:11:36.070 |
|
and downsampling, et cetera. |
|
|
|
01:11:37.180 --> 01:11:38.610 |
|
The one problem is what's called |
|
|
|
01:11:38.610 --> 01:11:40.510 |
|
Vanishing Gradients, that the early |
|
|
|
01:11:40.510 --> 01:11:42.493 |
|
weights have a long path to reach the |
|
|
|
01:11:42.493 --> 01:11:42.766 |
|
output. |
|
|
|
01:11:42.766 --> 01:11:45.350 |
|
So when we talked about back |
|
|
|
01:11:45.350 --> 01:11:47.242 |
|
propagation, remember that the early |
|
|
|
01:11:47.242 --> 01:11:49.480 |
|
weights have this product of weight |
|
|
|
01:11:49.480 --> 01:11:51.393 |
|
terms in them. |
|
|
|
01:11:51.393 --> 01:11:56.170 |
|
So if the |
|
|
|
01:11:56.170 --> 01:11:59.390 |
|
outputs of the later nodes are zero, |
|
|
|
01:11:59.390 --> 01:12:02.160 |
|
then the earlier Gradients get cut off. |
|
|
|
01:12:04.390 --> 01:12:06.200 |
|
So it's hard to optimize the early |
|
|
|
01:12:06.200 --> 01:12:08.120 |
|
layers and you can do the multiple |
|
|
|
01:12:08.120 --> 01:12:09.820 |
|
stages of supervision like GoogLeNet |
|
|
|
01:12:09.820 --> 01:12:13.720 |
|
did, but it's complicated and time |
|
|
|
01:12:13.720 --> 01:12:14.794 |
|
consuming to do. |
|
|
|
01:12:14.794 --> 01:12:16.650 |
|
So it's very heavy Training. |
|
|
|
01:12:17.440 --> 01:12:19.480 |
|
The other problem is information |
|
|
|
01:12:19.480 --> 01:12:20.150 |
|
propagation. |
|
|
|
01:12:20.840 --> 01:12:22.350 |
|
So you can think of a Multi layer |
|
|
|
01:12:22.350 --> 01:12:24.280 |
|
network as at each stage of the network |
|
|
|
01:12:24.280 --> 01:12:26.005 |
|
you're propagating the information from |
|
|
|
01:12:26.005 --> 01:12:28.290 |
|
the previous layer and then doing some |
|
|
|
01:12:28.290 --> 01:12:30.180 |
|
additional analysis on top of it to |
|
|
|
01:12:30.180 --> 01:12:33.050 |
|
hopefully add some more useful features |
|
|
|
01:12:33.050 --> 01:12:34.620 |
|
for the final Prediction. |
|
|
|
01:12:35.210 --> 01:12:37.370 |
|
So you start with the Input, which is a |
|
|
|
01:12:37.370 --> 01:12:39.440 |
|
complete representation of the data, |
|
|
|
01:12:39.440 --> 01:12:40.910 |
|
all the information's there. |
|
|
|
01:12:40.910 --> 01:12:42.895 |
|
And then you transform it with the next |
|
|
|
01:12:42.895 --> 01:12:44.651 |
|
layer and transform it with the next |
|
|
|
01:12:44.651 --> 01:12:46.408 |
|
layer and transform it with the next |
|
|
|
01:12:46.408 --> 01:12:46.659 |
|
layer. |
|
|
|
01:12:46.659 --> 01:12:48.330 |
|
And each time you have to try to |
|
|
|
01:12:48.330 --> 01:12:50.250 |
|
maintain the information that's in the |
|
|
|
01:12:50.250 --> 01:12:53.150 |
|
previous layer, but also put it into a |
|
|
|
01:12:53.150 --> 01:12:55.290 |
|
form that's more useful for Prediction. |
|
|
|
01:12:56.540 --> 01:12:57.070 |
|
And. |
|
|
|
01:12:57.750 --> 01:12:59.620 |
|
The and so. |
|
|
|
01:13:00.350 --> 01:13:02.860 |
|
If you initialize the weights to 0, for |
|
|
|
01:13:02.860 --> 01:13:04.516 |
|
example, then it's not retaining the |
|
|
|
01:13:04.516 --> 01:13:05.900 |
|
information in the previous layer, so |
|
|
|
01:13:05.900 --> 01:13:07.555 |
|
it has to actually learn something just |
|
|
|
01:13:07.555 --> 01:13:09.630 |
|
to reproduce that original information. |
|
|
|
01:13:11.540 --> 01:13:13.850 |
|
So their solution to this and I'll stop |
|
|
|
01:13:13.850 --> 01:13:16.260 |
|
with this slide and I'll continue with |
|
|
|
01:13:16.260 --> 01:13:17.660 |
|
this in the vision portion since I'm |
|
|
|
01:13:17.660 --> 01:13:18.740 |
|
kind of like getting into vision |
|
|
|
01:13:18.740 --> 01:13:21.060 |
|
anyway, but let me tell you about this |
|
|
|
01:13:21.060 --> 01:13:21.730 |
|
module. |
|
|
|
01:13:22.390 --> 01:13:23.920 |
|
The. |
|
|
|
01:13:24.090 --> 01:13:26.500 |
|
Their solution to this is the ResNet |
|
|
|
01:13:26.500 --> 01:13:27.110 |
|
module. |
|
|
|
01:13:28.430 --> 01:13:31.580 |
|
So they use what's called a skip or |
|
|
|
01:13:31.580 --> 01:13:34.990 |
|
shortcut connection around two to three |
|
|
|
01:13:34.990 --> 01:13:35.950 |
|
layer MLP. |
|
|
|
01:13:35.950 --> 01:13:36.650 |
|
So you. |
|
|
|
01:13:37.530 --> 01:13:39.935 |
|
Your Input goes into a weight layer, a |
|
|
|
01:13:39.935 --> 01:13:42.830 |
|
linear layer, a ReLU, another linear |
|
|
|
01:13:42.830 --> 01:13:45.370 |
|
layer and then you add back the input |
|
|
|
01:13:45.370 --> 01:13:46.200 |
|
to the end. |
|
|
|
01:13:46.880 --> 01:13:49.020 |
|
And this allows the Gradients to flow |
|
|
|
01:13:49.020 --> 01:13:50.580 |
|
back through this because this is just |
|
|
|
01:13:50.580 --> 01:13:51.810 |
|
F of X = X. |
|
|
|
01:13:51.810 --> 01:13:54.295 |
|
So Gradients can flow straight around |
|
|
|
01:13:54.295 --> 01:13:55.660 |
|
this network if they need to. |
|
|
|
01:13:56.320 --> 01:13:58.680 |
|
As well as flowing through this way and |
|
|
|
01:13:58.680 --> 01:14:01.390 |
|
also this guy, even if these weights |
|
|
|
01:14:01.390 --> 01:14:03.360 |
|
are zero, that information is still |
|
|
|
01:14:03.360 --> 01:14:06.120 |
|
preserved because you add X to the |
|
|
|
01:14:06.120 --> 01:14:08.760 |
|
output of these layers and so each |
|
|
|
01:14:08.760 --> 01:14:10.890 |
|
module only needs to like add |
|
|
|
01:14:10.890 --> 01:14:12.070 |
|
information, doesn't need to worry |
|
|
|
01:14:12.070 --> 01:14:13.670 |
|
about reproducing the previous |
|
|
|
01:14:13.670 --> 01:14:14.350 |
|
information. |
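|
NOTE |
A minimal sketch of a residual (skip-connection) block in the spirit of what's described above, assuming PyTorch; real ResNet blocks use convolutions and batch norm, so the linear layers here are just for illustration: |
|
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.fc1 = nn.Linear(dim, dim)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(dim, dim)

    def forward(self, x):
        # Two weight layers with a ReLU in between...
        out = self.fc2(self.relu(self.fc1(x)))
        # ...plus the input added back, so information (and gradients) can
        # flow straight through even if the weight layers output zero.
        return self.relu(out + x)

# x = torch.randn(4, 64); y = ResidualBlock(64)(x)
|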
|
|
|
01:14:15.370 --> 01:14:17.280 |
|
And I'm just going to show you one |
|
|
|
01:14:17.280 --> 01:14:19.550 |
|
thing. So that caused this |
|
|
|
01:14:19.550 --> 01:14:20.690 |
|
revolution of Depth. |
|
|
|
01:14:21.440 --> 01:14:24.390 |
|
Where in 2012 the winner of ImageNet |
|
|
|
01:14:24.390 --> 01:14:27.817 |
|
was 8 layers, in 2014 it was 19 layers. |
|
|
|
01:14:27.817 --> 01:14:31.570 |
|
In 2015 it was Resnet with 152 layers. |
|
|
|
01:14:32.410 --> 01:14:34.530 |
|
So this allowed you to basically train |
|
|
|
01:14:34.530 --> 01:14:38.870 |
|
networks of any depth, and you could |
|
|
|
01:14:38.870 --> 01:14:40.470 |
|
even have 1000 layer network if you |
|
|
|
01:14:40.470 --> 01:14:42.270 |
|
wanted and you'd be able to train it. |
|
|
|
01:14:43.020 --> 01:14:44.540 |
|
And the reason is because the data can |
|
|
|
01:14:44.540 --> 01:14:46.410 |
|
just flow straight through these skip |
|
|
|
01:14:46.410 --> 01:14:47.630 |
|
connections all the way to the |
|
|
|
01:14:47.630 --> 01:14:48.170 |
|
beginning. |
|
|
|
01:14:48.170 --> 01:14:49.930 |
|
So it's actually like you can optimize |
|
|
|
01:14:49.930 --> 01:14:51.990 |
|
all these blocks like separately from |
|
|
|
01:14:51.990 --> 01:14:52.450 |
|
each other. |
|
|
|
01:14:53.060 --> 01:14:54.395 |
|
And it causes. |
|
|
|
01:14:54.395 --> 01:14:56.540 |
|
It also causes an interesting behavior |
|
|
|
01:14:56.540 --> 01:14:58.430 |
|
where they kind of act as ensembles |
|
|
|
01:14:58.430 --> 01:15:00.670 |
|
because the information can like skip |
|
|
|
01:15:00.670 --> 01:15:01.710 |
|
sections of the network. |
|
|
|
01:15:01.710 --> 01:15:03.230 |
|
So you can basically have like separate |
|
|
|
01:15:03.230 --> 01:15:04.400 |
|
predictors that are learned and |
|
|
|
01:15:04.400 --> 01:15:05.060 |
|
recombined. |
|
|
|
01:15:05.840 --> 01:15:07.570 |
|
And so with larger models, you actually |
|
|
|
01:15:07.570 --> 01:15:10.680 |
|
get a property of reducing the variance |
|
|
|
01:15:10.680 --> 01:15:12.490 |
|
instead of increasing the variance, |
|
|
|
01:15:12.490 --> 01:15:13.840 |
|
even though you have more parameters in |
|
|
|
01:15:13.840 --> 01:15:14.780 |
|
your model. |
|
|
|
01:15:14.780 --> 01:15:17.280 |
|
That's a little bit of a speculation, |
|
|
|
01:15:17.280 --> 01:15:18.660 |
|
but that seems to be the behavior. |
|
|
|
01:15:19.820 --> 01:15:23.556 |
|
All right, so Tuesday I'm going to do |
|
|
|
01:15:23.556 --> 01:15:25.935 |
|
another consolidation review. |
|
|
|
01:15:25.935 --> 01:15:26.580 |
|
|
|
|
01:15:26.580 --> 01:15:28.590 |
|
If you have anything specific you want |
|
|
|
01:15:28.590 --> 01:15:30.620 |
|
me to cover about the questions or |
|
|
|
01:15:30.620 --> 01:15:33.210 |
|
concepts, post it on Campuswire. |
|
|
|
01:15:33.210 --> 01:15:34.620 |
|
You can find the posts there. |
|
|
|
01:15:34.620 --> 01:15:35.260 |
|
Reply to it. |
|
|
|
01:15:36.030 --> 01:15:39.120 |
|
And then I'm going to continue talking |
|
|
|
01:15:39.120 --> 01:15:40.560 |
|
about Deep Networks with computer |
|
|
|
01:15:40.560 --> 01:15:43.160 |
|
vision examples on Thursday. |
|
|
|
01:15:43.160 --> 01:15:44.050 |
|
So thank you. |
|
|
|
01:15:44.050 --> 01:15:44.820 |
|
Have a good weekend. |
|
|
|
|