WEBVTT
Kind: captions
Language: en-US
NOTE
Created on 2024-02-07T21:00:00.0962747Z by ClassTranscribe
00:01:35.120 --> 00:01:36.820
Alright, good morning everybody.
00:01:41.410 --> 00:01:42.020
So.
00:01:42.870 --> 00:01:44.460
And so this is where we are in this
00:01:44.460 --> 00:01:45.100
semester.
00:01:45.790 --> 00:01:49.530
Just to kind of touch base.
00:01:50.420 --> 00:01:51.740
So we finished.
00:01:51.740 --> 00:01:53.770
So we just finished the section on
00:01:53.770 --> 00:01:55.980
supervised learning fundamentals, so.
00:01:57.120 --> 00:01:59.720
The previous section was basically
00:01:59.720 --> 00:02:01.720
talking about machine learning models
00:02:01.720 --> 00:02:02.480
in general.
00:02:02.480 --> 00:02:04.052
We talked about a big variety of
00:02:04.052 --> 00:02:05.310
machine learning models and the basic
00:02:05.310 --> 00:02:05.970
concepts.
00:02:07.260 --> 00:02:08.910
Next we're going to talk about the
00:02:08.910 --> 00:02:11.100
application of some of those models,
00:02:11.100 --> 00:02:13.520
particularly deep learning models to
00:02:13.520 --> 00:02:16.510
vision and language primarily first.
00:02:17.720 --> 00:02:19.690
So I'm going to talk about deep
00:02:19.690 --> 00:02:21.820
networks and computer vision today.
00:02:22.620 --> 00:02:24.790
I'll talk about language models like
00:02:24.790 --> 00:02:28.380
how to represent words and probably get
00:02:28.380 --> 00:02:30.580
into Transformers a little bit on
00:02:30.580 --> 00:02:31.120
Tuesday.
00:02:32.000 --> 00:02:34.010
And then talk about the use of
00:02:34.010 --> 00:02:35.690
Transformers, which is just a different
00:02:35.690 --> 00:02:38.490
kind of deep network in language and
00:02:38.490 --> 00:02:40.940
vision on Thursday, the following
00:02:40.940 --> 00:02:41.420
Thursday.
00:02:42.180 --> 00:02:46.260
Then CLIP and GPT-3, which are two well-
00:02:46.260 --> 00:02:50.480
known foundation models that also build
00:02:50.480 --> 00:02:52.260
on the previous concepts.
00:02:53.490 --> 00:02:56.770
And then, so in terms of
00:02:56.770 --> 00:02:58.320
assignments, remember that homework two
00:02:58.320 --> 00:02:59.050
is due on Monday.
00:03:00.110 --> 00:03:03.660
The exam is on March 9th, Thursday, and
00:03:03.660 --> 00:03:04.450
again, you don't.
00:03:04.450 --> 00:03:05.750
You should not come to class.
00:03:05.750 --> 00:03:06.980
I won't be here.
00:03:06.980 --> 00:03:08.050
Nobody needs to be here.
00:03:08.820 --> 00:03:11.570
It's a take home, or it's at
00:03:11.570 --> 00:03:14.475
home and it's open book and I'll send
00:03:14.475 --> 00:03:14.760
a.
00:03:16.330 --> 00:03:19.550
Sometime like next week I'll send a
00:03:19.550 --> 00:03:22.230
Campuswire message with more details,
00:03:22.230 --> 00:03:23.990
but no new information really.
00:03:26.170 --> 00:03:27.780
So this is the Thursday before spring
00:03:27.780 --> 00:03:29.406
break, then there's spring break.
00:03:29.406 --> 00:03:31.730
Then I'll talk about
00:03:31.730 --> 00:03:34.576
different ways to adapt like networks
00:03:34.576 --> 00:03:38.070
or adapt methods to new domains or new
00:03:38.070 --> 00:03:38.530
tasks.
00:03:40.400 --> 00:03:43.910
Talk about like some of the moral and
00:03:43.910 --> 00:03:48.160
ethical issues that surround AI and
00:03:48.160 --> 00:03:48.510
learning.
00:03:49.680 --> 00:03:50.750
And then?
00:03:50.870 --> 00:03:54.063
And then some issues around data and
00:03:54.063 --> 00:03:54.980
data sets.
00:03:54.980 --> 00:03:57.830
And then the next section is on pattern
00:03:57.830 --> 00:03:58.405
discovery.
00:03:58.405 --> 00:04:00.590
So that's focused on unsupervised
00:04:00.590 --> 00:04:03.300
methods where you don't have a ground
00:04:03.300 --> 00:04:05.068
truth for what you're trying to
00:04:05.068 --> 00:04:05.391
predict.
00:04:05.391 --> 00:04:07.420
And you may not be trying to predict
00:04:07.420 --> 00:04:08.910
anything, you may just be organizing
00:04:08.910 --> 00:04:09.723
the data.
00:04:09.723 --> 00:04:12.490
So we'll talk about clustering, missing
00:04:12.490 --> 00:04:14.290
data problems, and the EM algorithm.
00:04:15.340 --> 00:04:17.400
How to estimate probabilities?
00:04:17.400 --> 00:04:20.890
Data visualization most likely.
00:04:20.890 --> 00:04:22.810
Not completely sure about these two
00:04:22.810 --> 00:04:25.270
topics yet, but I may do topic modeling
00:04:25.270 --> 00:04:28.150
which is another language technique and
00:04:28.150 --> 00:04:31.180
CCA canonical correlation analysis.
00:04:32.240 --> 00:04:35.210
And then there's going to be more
00:04:35.210 --> 00:04:37.860
applications and special topics.
00:04:37.860 --> 00:04:40.450
So I'm planning to talk about audio.
00:04:40.450 --> 00:04:43.020
I think Josh Levine, one of the TAs,
00:04:43.020 --> 00:04:45.210
is going to talk about reinforcement
00:04:45.210 --> 00:04:45.849
learning, I think.
00:04:46.910 --> 00:04:50.180
And then I'll talk about the difference
00:04:50.180 --> 00:04:52.500
between machine learning practice and
00:04:52.500 --> 00:04:53.770
theory because.
00:04:54.750 --> 00:04:56.160
Kind of like what you focus on is
00:04:56.160 --> 00:04:57.500
pretty different when you go out and
00:04:57.500 --> 00:04:58.910
apply machine learning versus when
00:04:58.910 --> 00:05:00.520
you're trying to develop algorithms or
00:05:00.520 --> 00:05:02.310
you're reading papers, what they're
00:05:02.310 --> 00:05:02.840
focused on.
00:05:04.570 --> 00:05:05.680
And then this will be your wrap up.
00:05:07.650 --> 00:05:09.853
And then it's cut off here, but if
00:05:09.853 --> 00:05:12.436
it had more lines, you would see that
00:05:12.436 --> 00:05:15.500
the final project is due on that
00:05:15.500 --> 00:05:16.600
following day.
00:05:16.600 --> 00:05:18.110
And then I think the exam, if I
00:05:18.110 --> 00:05:19.835
remember correctly, is May 9th, the
00:05:19.835 --> 00:05:21.360
final exam which is also.
00:05:22.310 --> 00:05:23.990
Take at Home, open book.
00:05:28.350 --> 00:05:31.020
And I've said it before, but some
00:05:31.020 --> 00:05:34.850
people have been asking if you need
00:05:34.850 --> 00:05:36.170
to reach like a certain number of
00:05:36.170 --> 00:05:36.820
project points.
00:05:36.820 --> 00:05:38.230
So if you're in the graduate section
00:05:38.230 --> 00:05:39.780
or if you're in the four
00:05:39.780 --> 00:05:42.640
credit section, it's now 525 total
00:05:42.640 --> 00:05:42.930
points.
00:05:43.850 --> 00:05:45.200
If you're in the three credit section,
00:05:45.200 --> 00:05:46.720
it's 425 points.
00:05:47.340 --> 00:05:49.760
And you can make those points out of
00:05:49.760 --> 00:05:51.260
like multiple homeworks or a
00:05:51.260 --> 00:05:52.985
combination of the homework and the
00:05:52.985 --> 00:05:53.680
final project.
00:05:53.680 --> 00:05:57.114
So if you don't want to do the final
00:05:57.114 --> 00:05:59.600
project, you can earn extra points on
00:05:59.600 --> 00:06:01.103
the homeworks and skip the final
00:06:01.103 --> 00:06:01.389
project.
00:06:02.220 --> 00:06:04.210
But you're also welcome to do more
00:06:04.210 --> 00:06:05.580
points than you actually need for the
00:06:05.580 --> 00:06:06.110
grade.
00:06:06.110 --> 00:06:08.102
So like, even if you didn't need to do
00:06:08.102 --> 00:06:10.239
the final project, but if you want to
00:06:10.240 --> 00:06:12.312
because you want to learn, or you want
00:06:12.312 --> 00:06:14.085
to like, get feedback on a project you
00:06:14.085 --> 00:06:15.350
want to do, then you're welcome to do
00:06:15.350 --> 00:06:15.500
that.
00:06:20.070 --> 00:06:20.500
Question.
00:06:36.420 --> 00:06:38.100
You can only use.
00:06:38.100 --> 00:06:39.770
You can only get a total.
00:06:39.770 --> 00:06:41.427
So the question is whether you can
00:06:41.427 --> 00:06:42.770
apply extra credit from the
00:06:42.770 --> 00:06:44.050
projects to the exam.
00:06:44.680 --> 00:06:51.800
You can only apply 15 out of 425 or 525
00:06:51.800 --> 00:06:53.210
to the exam.
00:06:53.210 --> 00:06:53.580
Sort of.
00:06:54.210 --> 00:07:01.010
So if you got 440 points total and
00:07:01.010 --> 00:07:03.190
you're in the three credit version then
00:07:03.190 --> 00:07:06.454
your project score would be 440 / 425
00:07:06.454 --> 00:07:09.786
and so it would be like more than 100%.
00:07:09.786 --> 00:07:11.570
So that's essentially like giving you
00:07:11.570 --> 00:07:12.710
credit towards the exam.
00:07:12.710 --> 00:07:15.340
But I limit it because I don't want
00:07:15.340 --> 00:07:16.618
because the exam is complementary.
00:07:16.618 --> 00:07:19.760
So I don't want like somebody to just.
00:07:20.390 --> 00:07:23.190
Do a lot of homework and then bomb the
00:07:23.190 --> 00:07:25.790
exam, because I would still like not be
00:07:25.790 --> 00:07:27.382
very confident that they understand a
00:07:27.382 --> 00:07:28.690
lot of the concepts if they did that.
00:07:32.020 --> 00:07:32.760
OK.
00:07:32.760 --> 00:07:36.500
So I'm going to talk about the Imagenet
00:07:36.500 --> 00:07:38.110
challenge in a little bit more detail,
00:07:38.110 --> 00:07:39.280
which was like really
00:07:39.990 --> 00:07:42.710
a watershed moment in data sets for
00:07:42.710 --> 00:07:43.710
vision and deep learning.
00:07:44.890 --> 00:07:46.820
Then I'll talk, I'll go into more
00:07:46.820 --> 00:07:48.170
detail about the ResNet model.
00:07:49.350 --> 00:07:52.460
I'll talk about how we can adapt a pre
00:07:52.460 --> 00:07:54.075
trained network to new tasks, which is
00:07:54.075 --> 00:07:55.270
a really common thing to do.
00:07:56.200 --> 00:07:59.480
And then about the Mask R-CNN line of
00:07:59.480 --> 00:08:01.650
object detection and segmentation,
00:08:01.650 --> 00:08:04.520
which is a really commonly used system.
00:08:05.430 --> 00:08:07.400
By non vision researchers as well if
00:08:07.400 --> 00:08:08.910
you're trying to detect things in
00:08:08.910 --> 00:08:09.310
images.
00:08:10.010 --> 00:08:12.450
And then very briefly about the U-Net
00:08:12.450 --> 00:08:13.120
architecture.
00:08:15.820 --> 00:08:18.660
So the Imagenet challenge was really,
00:08:18.660 --> 00:08:20.880
at the time, a very unique data set in
00:08:20.880 --> 00:08:24.362
the scale of the number of classes and
00:08:24.362 --> 00:08:25.990
the number of images that were labeled.
00:08:26.710 --> 00:08:30.150
So there's, in total, 22,000
00:08:30.150 --> 00:08:32.690
categories and 15 million images.
00:08:32.690 --> 00:08:35.570
Initially, the challenge was for 1000
00:08:35.570 --> 00:08:38.050
of the categories with a subset of
00:08:38.050 --> 00:08:40.720
those images, but now there's a there's
00:08:40.720 --> 00:08:43.100
also an ImageNet-22K data set that
00:08:43.100 --> 00:08:44.230
people sometimes use.
00:08:45.570 --> 00:08:48.120
For training models as well.
00:08:49.560 --> 00:08:50.890
So how did they get this data?
00:08:51.650 --> 00:08:53.910
So they started with WordNet to
00:08:53.910 --> 00:08:57.573
get a set of nouns that
00:08:57.573 --> 00:08:59.380
they could use for their classes.
00:08:59.380 --> 00:09:03.380
So WordNet was like a graph
00:09:03.380 --> 00:09:04.730
structure of words and their
00:09:04.730 --> 00:09:07.330
relationships that if I remember
00:09:07.330 --> 00:09:08.520
correctly it was from Princeton.
00:09:09.390 --> 00:09:11.850
And so they just basically like mined
00:09:11.850 --> 00:09:13.240
WordNet
00:09:14.400 --> 00:09:16.490
To get a bunch of different nouns,
00:09:16.490 --> 00:09:17.125
like German Shepherd.
00:09:17.125 --> 00:09:19.000
And so they're like descending like
00:09:19.000 --> 00:09:21.205
several levels into the WordNet tree
00:09:21.205 --> 00:09:23.580
and pulling nouns from there that could
00:09:23.580 --> 00:09:24.720
be visually identified.
00:09:27.990 --> 00:09:31.400
Then for each of those nouns they do
00:09:31.400 --> 00:09:34.470
like a Google image search I
00:09:34.470 --> 00:09:37.170
think it was, and download a bunch of
00:09:37.170 --> 00:09:40.245
images that are like the top hits for
00:09:40.245 --> 00:09:40.880
that noun.
00:09:41.660 --> 00:09:44.540
So in consequence, these tend to be
00:09:44.540 --> 00:09:47.130
like pretty like relatively like easy
00:09:47.130 --> 00:09:48.670
examples of those nouns.
00:09:49.540 --> 00:09:51.142
So for example, when you search
00:09:51.142 --> 00:09:52.710
for German Shepherd, most of them are
00:09:52.710 --> 00:09:54.479
just like pictures of a German shepherd
00:09:54.480 --> 00:09:56.050
rather than pictures that happen to
00:09:56.050 --> 00:09:57.000
have a German shepherd in it.
00:09:58.360 --> 00:10:01.120
But again the aim is to classify
00:10:01.120 --> 00:10:03.300
each image into
00:10:03.300 --> 00:10:04.350
a particular category.
00:10:05.110 --> 00:10:06.500
When you download stuff, you're going
00:10:06.500 --> 00:10:07.850
to also get other random things.
00:10:07.850 --> 00:10:09.660
So they have a Dalmatian here, there's
00:10:09.660 --> 00:10:12.450
a sketch, there's a picture of Germany.
00:10:14.320 --> 00:10:17.110
And so then they need to clean up this
00:10:17.110 --> 00:10:17.495
data.
00:10:17.495 --> 00:10:19.080
So you want to try to, as much as
00:10:19.080 --> 00:10:21.510
possible, remove all the images that
00:10:21.510 --> 00:10:22.880
don't actually correspond to German
00:10:22.880 --> 00:10:23.360
Shepherd.
00:10:25.440 --> 00:10:27.290
So they actually, they tried doing this
00:10:27.290 --> 00:10:27.770
many ways.
00:10:27.770 --> 00:10:29.330
They tried doing it themselves.
00:10:29.330 --> 00:10:31.280
It was just like way too big of a task.
00:10:31.280 --> 00:10:35.280
They tried getting like 1000 people in
00:10:35.280 --> 00:10:36.080
Princeton to do it.
00:10:36.080 --> 00:10:37.020
That was still too big.
00:10:37.760 --> 00:10:40.520
So at the end they used Amazon
00:10:40.520 --> 00:10:42.030
Mechanical Turk which is a service
00:10:42.030 --> 00:10:44.645
where you can upload the data and
00:10:44.645 --> 00:10:47.960
an annotation interface and then pay
00:10:47.960 --> 00:10:49.350
people to do the annotation.
00:10:49.970 --> 00:10:53.390
And often it's like pretty cheap.
00:10:53.390 --> 00:10:56.040
So we will talk about this more.
00:10:56.040 --> 00:10:59.160
This is a bit of a not this particular
00:10:59.160 --> 00:11:00.690
instance, but in general using
00:11:00.690 --> 00:11:02.350
Mechanical Turk and other like cheap
00:11:02.350 --> 00:11:03.990
labor services.
00:11:04.720 --> 00:11:07.260
Is like one of the commonly talked
00:11:07.260 --> 00:11:07.840
about like.
00:11:09.240 --> 00:11:11.930
Kind of questionable practices of AI,
00:11:11.930 --> 00:11:13.090
but we'll talk about that in another
00:11:13.090 --> 00:11:13.630
lecture.
00:11:24.980 --> 00:11:27.010
So the question is, what if there?
00:11:27.010 --> 00:11:29.145
What if most of it were bad data?
00:11:29.145 --> 00:11:31.330
I think they would just need to like
00:11:31.330 --> 00:11:32.970
download more images I guess.
00:11:38.160 --> 00:11:40.130
If it's not cleaned properly.
00:11:40.830 --> 00:11:44.012
So then you have what's called if some
00:11:44.012 --> 00:11:46.461
of the labels are, if some of the
00:11:46.461 --> 00:11:48.120
images are wrong that are assigned to a
00:11:48.120 --> 00:11:49.866
label, or some of the wrong labels are
00:11:49.866 --> 00:11:50.529
assigned to images.
00:11:52.300 --> 00:11:54.240
That's usually called label noise.
00:11:58.210 --> 00:12:02.440
Some methods will be more harmed by
00:12:02.440 --> 00:12:04.980
that than other methods.
00:12:04.980 --> 00:12:07.700
Often if it's just like 1% of the data
00:12:07.700 --> 00:12:09.070
with a deep network, actually the
00:12:09.070 --> 00:12:11.670
impact won't be that bad, because
00:12:11.670 --> 00:12:12.440
you're not.
00:12:14.030 --> 00:12:16.889
Because it would just be kind of like
00:12:16.890 --> 00:12:17.310
a.
00:12:17.310 --> 00:12:19.560
You could think of it as an irreducible
00:12:19.560 --> 00:12:21.470
error of like 1%.
00:12:21.470 --> 00:12:23.540
And the network is cycling through tons
00:12:23.540 --> 00:12:25.240
of data, so it's not going to overly
00:12:25.240 --> 00:12:27.070
focus on those few examples.
00:12:27.820 --> 00:12:30.200
But for methods like boosting that give more
00:12:30.200 --> 00:12:32.060
weight to misclassified examples, that
00:12:32.060 --> 00:12:34.970
can really damage those methods because
00:12:34.970 --> 00:12:37.400
they'll start to focus more on the
00:12:37.400 --> 00:12:38.840
incorrectly labeled examples.
00:12:40.190 --> 00:12:41.580
There's a whole line of
00:12:41.580 --> 00:12:43.990
research or many methods proposed for
00:12:43.990 --> 00:12:45.930
how to like better deal with the label
00:12:45.930 --> 00:12:46.260
noise.
00:12:46.260 --> 00:12:47.210
For example, you can try to
00:12:47.210 --> 00:12:49.110
automatically infer whether something
00:12:49.110 --> 00:12:50.800
is from like a true label distribution
00:12:50.800 --> 00:12:52.980
or a noisy label distribution.
00:12:56.440 --> 00:12:58.970
So at the end they had 49,000
00:12:58.970 --> 00:13:02.010
workers from 167 different countries
00:13:02.010 --> 00:13:03.380
that contributed to the labeling.
00:13:06.620 --> 00:13:11.820
And then the task is that your
00:13:11.820 --> 00:13:14.140
classifier can make its top
00:13:14.140 --> 00:13:15.310
five predictions.
00:13:16.090 --> 00:13:18.480
And at least one of those predictions
00:13:18.480 --> 00:13:20.300
has to match the ground truth label.
00:13:21.310 --> 00:13:23.170
And the reason for that is that these
00:13:23.170 --> 00:13:24.970
images will often have like multiple
00:13:24.970 --> 00:13:26.810
categories depicted in them.
00:13:26.810 --> 00:13:29.350
So like this image has a
00:13:29.350 --> 00:13:30.560
T-shirt and.
00:13:31.320 --> 00:13:34.525
It has a drumstick and it has a steel
00:13:34.525 --> 00:13:36.310
drum, so it's reasonable that the
00:13:36.310 --> 00:13:38.100
classifier could predict those things
00:13:38.100 --> 00:13:38.730
as well.
00:13:38.730 --> 00:13:40.407
But it was supposed to be a picture of
00:13:40.407 --> 00:13:41.000
a steel drum.
00:13:41.780 --> 00:13:44.390
So if your output is scale, T-shirt,
00:13:44.390 --> 00:13:46.400
steel drum, drumstick, mud turtle, then
00:13:46.400 --> 00:13:48.050
it would be considered correct because
00:13:48.050 --> 00:13:50.030
if those are your top five scoring
00:13:50.030 --> 00:13:50.580
classes.
00:13:51.300 --> 00:13:52.900
Because one of them is the ground truth
00:13:52.900 --> 00:13:54.580
label, which is steel drum.
00:13:54.580 --> 00:13:56.530
But if you replace steel drum with
00:13:56.530 --> 00:13:57.866
giant panda, then you're out.
00:13:57.866 --> 00:13:59.250
Then you would be incorrect.
00:13:59.250 --> 00:14:00.970
Because you don't, your top five
00:14:00.970 --> 00:14:02.300
predictions don't include the ground
00:14:02.300 --> 00:14:02.490
truth.
00:14:03.220 --> 00:14:04.940
So a lot of times with Imagenet you'll
00:14:04.940 --> 00:14:07.250
see like top five error which is this
00:14:07.250 --> 00:14:10.215
measure and sometimes you see top one
00:14:10.215 --> 00:14:12.050
measure which is that it has to
00:14:12.050 --> 00:14:13.950
actually predict steel drum as the top
00:14:13.950 --> 00:14:14.320
one hit.
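To make the top-five metric concrete, here is a minimal sketch (my own illustration, not from the lecture) of how top-5 error is typically computed from a model's class scores; the function and variable names are assumptions for illustration.

import torch

def top5_error(logits, labels):
    # logits: (N, 1000) class scores; labels: (N,) ground-truth class indices
    top5 = logits.topk(5, dim=1).indices                 # (N, 5) highest-scoring classes
    correct = (top5 == labels.unsqueeze(1)).any(dim=1)   # true if the ground truth appears anywhere in the top 5
    return 1.0 - correct.float().mean().item()           # fraction of images where it does not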
00:14:16.040 --> 00:14:17.450
And yeah, so.
00:14:18.670 --> 00:14:20.840
So it's not it's not possible to get
00:14:20.840 --> 00:14:22.625
perfect accuracy at the top one measure
00:14:22.625 --> 00:14:25.360
for sure, because this could have been
00:14:25.360 --> 00:14:27.610
like a return for T-shirt or something
00:14:27.610 --> 00:14:28.710
you don't really know.
00:14:31.990 --> 00:14:34.635
And then as I mentioned before, like
00:14:34.635 --> 00:14:37.060
the performance of deep networks, this
00:14:37.060 --> 00:14:41.230
is AlexNet on ImageNet, was the
00:14:41.230 --> 00:14:43.425
first like really compelling proof of
00:14:43.425 --> 00:14:45.490
the effectiveness of deep networks,
00:14:47.210 --> 00:14:48.950
in vision, but also in general.
00:14:50.600 --> 00:14:55.707
And the method that performed best in the
00:14:55.707 --> 00:14:57.890
2012 challenge was this one, which I've
00:14:57.890 --> 00:14:59.030
already talked about at length.
00:14:59.030 --> 00:14:59.260
So.
00:15:00.210 --> 00:15:01.150
More of a reminder.
00:15:02.790 --> 00:15:04.480
Then I would say the next big
00:15:04.480 --> 00:15:05.230
breakthrough.
00:15:05.230 --> 00:15:06.300
There were a bunch of different
00:15:06.300 --> 00:15:08.470
network architecture modifications
00:15:08.470 --> 00:15:09.450
that were proposed.
00:15:09.450 --> 00:15:13.750
VGG from Oxford and GoogLeNet and
00:15:13.750 --> 00:15:15.900
Inception Network and they all had
00:15:15.900 --> 00:15:17.700
their own kinds of innovations and ways
00:15:17.700 --> 00:15:19.320
of trying to make the network deeper.
00:15:19.320 --> 00:15:20.740
But I would say the next major
00:15:20.740 --> 00:15:23.070
breakthrough was Resnet.
00:15:23.070 --> 00:15:25.380
You still see Resnet models used quite
00:15:25.380 --> 00:15:26.760
frequently today.
00:15:28.310 --> 00:15:30.260
So again, in ResNet, the idea is
00:15:30.260 --> 00:15:34.400
that you pass
00:15:34.400 --> 00:15:37.440
the data through a bunch of MLPs or
00:15:37.440 --> 00:15:38.450
convolutional layers.
00:15:39.450 --> 00:15:41.370
But then every couple layers that you
00:15:41.370 --> 00:15:43.710
pass it through, you add back the
00:15:43.710 --> 00:15:45.620
previous value of the features to the
00:15:45.620 --> 00:15:46.050
output.
00:15:46.850 --> 00:15:48.320
And this thing is called a skip
00:15:48.320 --> 00:15:50.330
connection, this identity mapping.
00:15:50.330 --> 00:15:53.690
So you just add X to the output of a
00:15:53.690 --> 00:15:55.730
couple of layers that are processing X.
00:15:56.510 --> 00:15:58.985
And since this gradient is 1, it allows
00:15:58.985 --> 00:16:00.940
the gradients to flow very effectively
00:16:00.940 --> 00:16:01.730
through the network.
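As a minimal sketch of that idea (my own illustration, not the lecture's code): a residual block computes F(x) with a couple of layers and then adds x back, so the output is x + F(x) and the identity path carries a gradient of 1.

import torch.nn as nn

class TinyResidual(nn.Module):
    # assumes the dimension stays the same so x can be added directly to F(x)
    def __init__(self, dim):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
    def forward(self, x):
        return x + self.f(x)   # skip connection: output = x + F(x)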
00:16:03.330 --> 00:16:05.010
There's another variant of this that I
00:16:05.010 --> 00:16:07.210
didn't really have time to get to when
00:16:07.210 --> 00:16:08.580
I was last talking about ResNets.
00:16:09.440 --> 00:16:11.800
And that's the Resnet bottleneck
00:16:11.800 --> 00:16:12.530
module.
00:16:12.530 --> 00:16:14.170
So this is used for much deeper
00:16:14.170 --> 00:16:14.740
networks.
00:16:15.720 --> 00:16:18.115
And the idea is that you
00:16:18.115 --> 00:16:21.350
have some like high dimensional feature
00:16:21.350 --> 00:16:24.610
map that's being passed in, so you
00:16:24.610 --> 00:16:27.210
have like for each position it might be
00:16:27.210 --> 00:16:30.710
like a for example like a 14 by 14 grid
00:16:30.710 --> 00:16:33.820
of features and the features are 256
00:16:33.820 --> 00:16:34.720
dimensions deep.
00:16:36.290 --> 00:16:38.380
If you were to do convolution directly
00:16:38.380 --> 00:16:40.390
in that high dimensional feature space,
00:16:40.390 --> 00:16:41.700
then you would need a lot of weights
00:16:41.700 --> 00:16:42.890
and a lot of operations.
00:16:43.930 --> 00:16:45.910
And so the idea is that you first like
00:16:45.910 --> 00:16:47.450
project it down into a lower
00:16:47.450 --> 00:16:50.020
dimensional feature space by taking
00:16:50.020 --> 00:16:52.430
each feature cell by itself each
00:16:52.430 --> 00:16:55.260
position, and you map from 256
00:16:55.260 --> 00:16:57.948
dimensions down to 64 dimensions, so by
00:16:57.948 --> 00:16:59.070
a factor of 4.
00:16:59.850 --> 00:17:01.626
Then you apply the convolution, so then
00:17:01.626 --> 00:17:04.170
you have a filter that's operating over
00:17:04.170 --> 00:17:05.320
the.
00:17:05.700 --> 00:17:08.010
64 deep feature map.
00:17:09.050 --> 00:17:11.560
And then you and then you again like
00:17:11.560 --> 00:17:13.530
take each of those feature cells and
00:17:13.530 --> 00:17:16.480
map them back up to 256 dimensions.
00:17:16.480 --> 00:17:18.690
So this is called a bottleneck because
00:17:18.690 --> 00:17:20.580
you take the feature, the number of
00:17:20.580 --> 00:17:23.210
features, and at each position.
00:17:23.790 --> 00:17:26.303
And you reduce it by a factor of four, and
00:17:26.303 --> 00:17:28.460
then you process it, and then you bring
00:17:28.460 --> 00:17:30.140
it back up to the original
00:17:30.140 --> 00:17:30.950
dimensionality.
00:17:32.730 --> 00:17:34.590
And the main reason for this is that it
00:17:34.590 --> 00:17:37.460
makes things much faster and reduces
00:17:37.460 --> 00:17:38.590
the number of parameters in your
00:17:38.590 --> 00:17:38.980
network.
00:17:39.590 --> 00:17:41.020
So if you were directly doing
00:17:41.020 --> 00:17:45.000
convolution on a 256 dimensional
00:17:45.000 --> 00:17:45.650
feature map.
00:17:46.730 --> 00:17:49.730
Then your
00:17:49.730 --> 00:17:54.520
filter size would be 256 by three by
00:17:54.520 --> 00:17:55.220
three.
00:17:56.090 --> 00:17:57.970
And the number of operations that you
00:17:57.970 --> 00:17:59.370
would have to do at every position
00:17:59.370 --> 00:18:03.099
would be 256 by 256 by three by three.
00:18:04.010 --> 00:18:06.336
Where if you do this bottleneck, then
00:18:06.336 --> 00:18:08.575
you first like reduce
00:18:08.575 --> 00:18:10.490
the dimensionality of the features at
00:18:10.490 --> 00:18:11.190
each position.
00:18:11.190 --> 00:18:13.400
So for each position that's going to be
00:18:13.400 --> 00:18:15.620
256 by 64.
00:18:16.810 --> 00:18:19.970
And then you do convolution over the
00:18:19.970 --> 00:18:22.480
image, which for each position will be
00:18:22.480 --> 00:18:25.610
64 by 64 by three by three, because now
00:18:25.610 --> 00:18:27.592
it's only a 64 dimensional feature.
00:18:27.592 --> 00:18:30.050
And then you increase the
00:18:30.050 --> 00:18:32.120
dimensionality again by mapping it back
00:18:32.120 --> 00:18:33.250
up to 256.
00:18:34.000 --> 00:18:36.293
So that's going to be 64 by 256
00:18:36.293 --> 00:18:36.896
operations.
00:18:36.896 --> 00:18:39.730
So it's roughly like 1 ninth as many
00:18:39.730 --> 00:18:40.420
operations.
00:18:41.050 --> 00:18:42.620
And their experiments show that this
00:18:42.620 --> 00:18:45.870
performs very similarly to not doing
00:18:45.870 --> 00:18:48.170
the bottleneck, so it's kind of like a
00:18:48.170 --> 00:18:49.620
free efficiency gain.
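Here is a rough sketch of that bottleneck (my own illustration, with the channel sizes of 256 and 64 from the example; it ignores batch norm and exact ReLU placement): a 1x1 conv down to 64, a 3x3 conv at 64, and a 1x1 conv back up to 256. Per position the multiply counts are roughly 256*64 + 64*64*3*3 + 64*256, which is about 70 thousand, versus 256*256*3*3, about 590 thousand, for a plain 3x3 conv at 256 channels; that is where the roughly one-ninth figure comes from.

import torch.nn as nn

class Bottleneck(nn.Module):
    # simplified sketch; assumed channel sizes, no batch norm, no stride handling
    def __init__(self, channels=256, reduced=64):
        super().__init__()
        self.reduce = nn.Conv2d(channels, reduced, kernel_size=1)             # 256 -> 64 at each position
        self.conv = nn.Conv2d(reduced, reduced, kernel_size=3, padding=1)     # 3x3 conv on the 64-deep map
        self.expand = nn.Conv2d(reduced, channels, kernel_size=1)             # 64 -> 256 at each position
        self.relu = nn.ReLU()
    def forward(self, x):
        out = self.relu(self.reduce(x))
        out = self.relu(self.conv(out))
        out = self.expand(out)
        return self.relu(out + x)   # skip connection around the whole bottleneck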
00:18:52.170 --> 00:18:52.690
Question.
00:18:58.460 --> 00:18:59.455
Yeah, that's a good question.
00:18:59.455 --> 00:19:01.660
So this is, it's just an MLP, so you
00:19:01.660 --> 00:19:04.590
have like 256 dimensional vector coming
00:19:04.590 --> 00:19:04.800
in.
00:19:05.390 --> 00:19:08.820
And then you have 64 nodes in your
00:19:08.820 --> 00:19:09.200
layer.
00:19:09.870 --> 00:19:11.890
And then so then it just has 64
00:19:11.890 --> 00:19:12.340
outputs.
00:19:15.050 --> 00:19:16.880
And what's not like super.
00:19:16.880 --> 00:19:21.340
What may not be super obvious
00:19:21.340 --> 00:19:23.360
about this is you've got like a feature
00:19:23.360 --> 00:19:25.209
map, so you've got some grid of cells
00:19:25.210 --> 00:19:26.390
for each of those cells.
00:19:26.390 --> 00:19:28.470
It's like a vector that's 256
00:19:28.470 --> 00:19:29.780
dimensions long.
00:19:30.570 --> 00:19:32.780
And then you apply this MLP to each of
00:19:32.780 --> 00:19:35.770
those cells separately to map each
00:19:35.770 --> 00:19:39.150
position in your feature map down to 64
00:19:39.150 --> 00:19:39.620
dimensions.
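A quick way to see that (my own check, not from the slides): a 1x1 convolution and a linear layer that share the same weights give the same answer at every position, so the "MLP applied to each cell" and the 1x1 conv are the same operation.

import torch, torch.nn as nn

x = torch.randn(1, 256, 14, 14)                 # 14x14 feature map, 256 features per cell
conv1x1 = nn.Conv2d(256, 64, kernel_size=1)
linear = nn.Linear(256, 64)
linear.weight.data = conv1x1.weight.data.view(64, 256)   # copy the same weights into the linear layer
linear.bias.data = conv1x1.bias.data

a = conv1x1(x)                                            # (1, 64, 14, 14)
b = linear(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)     # apply per position, then move channels back
print(torch.allclose(a, b, atol=1e-5))                    # True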
00:19:44.630 --> 00:19:47.820
So all of this allows just one second.
00:19:47.820 --> 00:19:49.930
All of this allows Resnet to go super
00:19:49.930 --> 00:19:50.280
deep.
00:19:50.280 --> 00:19:54.340
So AlexNet was the winner of 2012, then
00:19:54.340 --> 00:19:56.539
VGG was the winner of 2014.
00:19:57.370 --> 00:19:59.520
With 19 layers and then Resnet was the
00:19:59.520 --> 00:20:02.235
winner of 2015 one year later with 152
00:20:02.235 --> 00:20:02.660
layers.
00:20:03.410 --> 00:20:07.792
And these skip connections allow the
00:20:07.792 --> 00:20:09.725
gradient to flow directly, essentially
00:20:09.725 --> 00:20:11.290
to any part of this network.
00:20:11.290 --> 00:20:13.092
Because these are all like gradient of
00:20:13.092 --> 00:20:15.510
1, the error gradient can flow to
00:20:15.510 --> 00:20:17.390
everything, and essentially you can
00:20:17.390 --> 00:20:18.620
optimize all these blocks
00:20:18.620 --> 00:20:19.560
simultaneously.
00:20:20.990 --> 00:20:23.280
This also makes the deep networks act
00:20:23.280 --> 00:20:25.320
as a kind of ensemble, where like each
00:20:25.320 --> 00:20:27.995
of these little modules can make their
00:20:27.995 --> 00:20:30.840
own hypothesis or own scores that then
00:20:30.840 --> 00:20:31.763
get added together.
00:20:31.763 --> 00:20:34.420
And you see a kind of ensemble behavior
00:20:34.420 --> 00:20:36.220
in ResNets, in that their variance
00:20:36.220 --> 00:20:38.269
tends to actually decrease as you get
00:20:38.270 --> 00:20:40.020
deeper networks rather than increase,
00:20:40.020 --> 00:20:41.340
which is what you would expect with the
00:20:41.340 --> 00:20:42.420
increased number of parameters.
00:20:43.330 --> 00:20:44.010
Was there a question?
00:20:46.930 --> 00:20:47.210
OK.
00:21:05.550 --> 00:21:07.750
So after.
00:21:08.670 --> 00:21:11.350
So let's say that you have a 14 by 14
00:21:11.350 --> 00:21:14.750
spatially feature map that is 256
00:21:14.750 --> 00:21:18.030
dimensions deep, so 256 features at
00:21:18.030 --> 00:21:20.190
each position in this 14 by 14 map.
00:21:21.430 --> 00:21:23.700
So first this will convert it into a 14
00:21:23.700 --> 00:21:25.700
by 14 by 64.
00:21:26.930 --> 00:21:29.905
Feature map so 14 by 14 spatially, but
00:21:29.905 --> 00:21:31.410
64 dimensions long.
00:21:31.410 --> 00:21:32.770
each feature vector at each
00:21:32.770 --> 00:21:33.230
position.
00:21:33.940 --> 00:21:35.370
Then you can do this three by three
00:21:35.370 --> 00:21:36.897
convolution, which means you have a
00:21:36.897 --> 00:21:38.250
three by three filter.
00:21:38.250 --> 00:21:40.440
That's like has 64 features that's
00:21:40.440 --> 00:21:41.910
operating over that map.
00:21:41.910 --> 00:21:45.860
That doesn't change the size of the
00:21:45.860 --> 00:21:48.165
representation at all, so the output of
00:21:48.165 --> 00:21:51.780
this will still be 14 by 14 by 64.
00:21:53.040 --> 00:21:54.890
Then you apply this at each feature
00:21:54.890 --> 00:21:58.640
cell and this will be a 256 node MLP
00:21:58.640 --> 00:22:00.700
that connects to the 64 features.
00:22:01.590 --> 00:22:05.040
And so this will map that into a 14 by
00:22:05.040 --> 00:22:06.540
14 by 256.
00:22:07.370 --> 00:22:10.089
And so you had a 14 by 14 by 256 that
00:22:10.090 --> 00:22:12.260
was fed in here and then that gets
00:22:12.260 --> 00:22:15.644
added back to the output which is 14 by
00:22:15.644 --> 00:22:16.569
14 by 256.
00:22:20.490 --> 00:22:22.280
The three by three means that
00:22:22.280 --> 00:22:24.860
it's a convolutional filter, so it has
00:22:24.860 --> 00:22:26.665
three by three spatial extent.
00:22:26.665 --> 00:22:27.480
So the.
00:22:27.480 --> 00:22:31.584
So this will be like operate on it will
00:22:31.584 --> 00:22:31.826
be.
00:22:31.826 --> 00:22:33.120
It's a linear model.
00:22:33.760 --> 00:22:36.280
That operates on the features at each
00:22:36.280 --> 00:22:38.320
position and the features of its
00:22:38.320 --> 00:22:41.610
neighbors, the cells that are
00:22:41.610 --> 00:22:42.240
right around it.
00:22:43.670 --> 00:22:45.660
Where these ones, the one by ones, just
00:22:45.660 --> 00:22:47.560
operate at each position without
00:22:47.560 --> 00:22:48.380
considering the neighbors.
00:22:53.820 --> 00:22:55.095
So I'll show you some.
00:22:55.095 --> 00:22:56.700
I'll show some of the architecture
00:22:56.700 --> 00:22:58.390
examples later too, which might also
00:22:58.390 --> 00:22:59.350
help clarify.
00:22:59.350 --> 00:23:00.800
But did you have a question?
00:23:16.600 --> 00:23:18.140
So one clarification.
00:23:18.140 --> 00:23:21.370
is that with a ResNet, you train it with SGD
00:23:21.370 --> 00:23:23.490
still, so there's an optimization
00:23:23.490 --> 00:23:25.110
algorithm and there's an architecture?
00:23:25.850 --> 00:23:29.852
The architecture defines like how the
00:23:29.852 --> 00:23:31.580
representation will change as you move
00:23:31.580 --> 00:23:33.120
through the network, and the
00:23:33.120 --> 00:23:35.170
optimization defines like how you're
00:23:35.170 --> 00:23:38.030
going to learn the weights that produce
00:23:38.030 --> 00:23:38.960
the representation.
00:23:39.640 --> 00:23:41.210
So the way that you would train a
00:23:41.210 --> 00:23:43.446
Resnet is the exact same as the way
00:23:43.446 --> 00:23:44.980
that you would train AlexNet.
00:23:44.980 --> 00:23:47.500
It would still be using SGD like SGD
00:23:47.500 --> 00:23:49.420
with momentum or Adam, most likely.
00:23:50.990 --> 00:23:53.560
But the architecture is different and
00:23:53.560 --> 00:23:56.140
actually even though this network looks
00:23:56.140 --> 00:23:56.660
small.
00:23:57.500 --> 00:23:59.880
It's actually pretty heavy in the sense
00:23:59.880 --> 00:24:02.055
that it has a lot of weights because it
00:24:02.055 --> 00:24:03.280
has larger filters.
00:24:04.020 --> 00:24:05.230
And it has.
00:24:06.160 --> 00:24:09.776
It has like deeper like this big dense.
00:24:09.776 --> 00:24:11.840
This dense means that linear layer
00:24:11.840 --> 00:24:12.370
essentially.
00:24:12.370 --> 00:24:15.030
So you have this big like 2048 by
00:24:15.030 --> 00:24:15.850
2048.
00:24:17.380 --> 00:24:18.780
Weight matrix here.
00:24:19.440 --> 00:24:21.100
So AlexNet actually has a lot of
00:24:21.100 --> 00:24:23.340
parameters and ResNets are actually
00:24:23.340 --> 00:24:24.730
faster to train.
00:24:24.910 --> 00:24:25.430
00:24:26.030 --> 00:24:28.265
Then these other networks, especially
00:24:28.265 --> 00:24:32.400
then VDG because they're just like
00:24:32.400 --> 00:24:34.710
better suited to optimization.
00:24:35.290 --> 00:24:36.720
So they're still using the same
00:24:36.720 --> 00:24:38.690
optimization method, but the
00:24:38.690 --> 00:24:41.620
architecture makes those methods more
00:24:41.620 --> 00:24:42.710
effective optimizers.
00:24:46.830 --> 00:24:49.910
So there's just a few components in a
00:24:49.910 --> 00:24:52.213
ResNet or CNN.
00:24:52.213 --> 00:24:54.470
CNN stands for convolutional neural
00:24:54.470 --> 00:24:56.150
network, so basically where you're
00:24:56.150 --> 00:24:58.890
operating over multiple positions in a
00:24:58.890 --> 00:24:59.170
grid.
00:25:00.620 --> 00:25:02.640
So first you have these learned 2D
00:25:02.640 --> 00:25:04.680
convolutional features which are
00:25:04.680 --> 00:25:07.420
applying some linear weights to each
00:25:07.420 --> 00:25:08.570
position and its neighbors.
00:25:09.790 --> 00:25:12.770
And then a same size like feature map
00:25:12.770 --> 00:25:13.320
as an output.
00:25:14.660 --> 00:25:15.220
00:25:15.870 --> 00:25:17.900
Then you have what's called batch norm,
00:25:17.900 --> 00:25:19.020
which I'll talk about in the next
00:25:19.020 --> 00:25:19.400
slide.
00:25:20.110 --> 00:25:21.650
And then you have ReLU, which we've
00:25:21.650 --> 00:25:23.135
talked about and then linear layers,
00:25:23.135 --> 00:25:26.990
which is MLP, just a perceptron layer.
00:25:30.040 --> 00:25:33.190
So batch normalization is used almost
00:25:33.190 --> 00:25:34.050
all the time now.
00:25:34.050 --> 00:25:36.334
It's really commonly used.
00:25:36.334 --> 00:25:39.000
It's used for vision, but also for
00:25:39.000 --> 00:25:39.870
other applications.
00:25:41.060 --> 00:25:43.460
The main idea of batch norm is that.
00:25:44.140 --> 00:25:45.760
As you're training the network since
00:25:45.760 --> 00:25:47.180
all the weights are being updated.
00:25:47.930 --> 00:25:50.413
The kind of distribution of the
00:25:50.413 --> 00:25:52.060
features is going to keep changing.
00:25:52.060 --> 00:25:55.740
So the features may become like more.
00:25:55.740 --> 00:25:57.790
The mean of the features may change,
00:25:57.790 --> 00:25:59.860
their variance may change because all
00:25:59.860 --> 00:26:00.890
the weights are in flux.
00:26:01.560 --> 00:26:03.730
And so this makes it kind of unstable
00:26:03.730 --> 00:26:05.750
that like the later layers keep
00:26:05.750 --> 00:26:07.500
on having to adapt to changes in the
00:26:07.500 --> 00:26:09.387
features of the earlier layers, and so
00:26:09.387 --> 00:26:10.740
like all the different layers are
00:26:10.740 --> 00:26:12.910
trying to like improve themselves but
00:26:12.910 --> 00:26:15.120
also react to the improvements of the
00:26:15.120 --> 00:26:16.280
other layers around them.
00:26:18.310 --> 00:26:20.385
The idea of batch norm is.
00:26:20.385 --> 00:26:23.700
So first you can kind of stabilize
00:26:23.700 --> 00:26:27.060
things by subtracting off the mean and
00:26:27.060 --> 00:26:29.170
dividing by the standard deviation of
00:26:29.170 --> 00:26:31.447
the features within each batch.
00:26:31.447 --> 00:26:33.970
So you could say like, I'm going to do
00:26:33.970 --> 00:26:34.120
this.
00:26:34.770 --> 00:26:37.600
Over the I'm going to subtract the mean
00:26:37.600 --> 00:26:39.190
and divide by the standard deviation of
00:26:39.190 --> 00:26:41.450
the features in the entire data set,
00:26:41.450 --> 00:26:42.870
but that would be really slow because
00:26:42.870 --> 00:26:44.380
you'd have to keep on reprocessing the
00:26:44.380 --> 00:26:46.030
whole data set to get these means and
00:26:46.030 --> 00:26:46.900
standard deviations.
00:26:47.530 --> 00:26:49.405
So instead they say for every batch.
00:26:49.405 --> 00:26:52.560
So you might be processing 120 examples
00:26:52.560 --> 00:26:53.050
at a time.
00:26:53.710 --> 00:26:55.210
You're going to compute the mean of the
00:26:55.210 --> 00:26:57.070
features of that batch, subtract it
00:26:57.070 --> 00:26:58.480
from the original value, compute the
00:26:58.480 --> 00:27:00.060
standard deviation or variance, and
00:27:00.060 --> 00:27:01.670
divide by the standard deviation.
00:27:03.320 --> 00:27:04.570
And then you get your normalized
00:27:04.570 --> 00:27:04.940
feature.
00:27:05.870 --> 00:27:08.325
And then you could say maybe this isn't
00:27:08.325 --> 00:27:09.960
the ideal thing to do.
00:27:09.960 --> 00:27:11.730
Maybe instead you should be using some
00:27:11.730 --> 00:27:13.850
other statistics to shift the features
00:27:13.850 --> 00:27:17.640
or to rescale them, maybe based on a
00:27:17.640 --> 00:27:22.080
longer history and so you have so first
00:27:22.080 --> 00:27:24.640
like you get some features X that are
00:27:24.640 --> 00:27:25.900
passed into the batch norm.
00:27:26.570 --> 00:27:28.715
It computes the mean, computes the
00:27:28.715 --> 00:27:30.410
variance, or equivalently, the square
00:27:30.410 --> 00:27:32.490
of the standard deviation,
00:27:32.490 --> 00:27:35.020
subtracts the mean from the data,
00:27:35.020 --> 00:27:38.290
divides by the standard deviation, or like
00:27:38.290 --> 00:27:39.660
square root of variance plus some
00:27:39.660 --> 00:27:40.066
epsilon.
00:27:40.066 --> 00:27:41.900
This is just so you don't have a divide
00:27:41.900 --> 00:27:42.420
by zero.
00:27:43.630 --> 00:27:44.870
And.
00:27:45.670 --> 00:27:48.820
And then you get your zero mean unit
00:27:48.820 --> 00:27:50.740
STD normalized features.
00:27:50.740 --> 00:27:52.020
So that's a really common kind of
00:27:52.020 --> 00:27:52.950
normalization, right?
00:27:53.670 --> 00:27:56.420
And then the final output is just some
00:27:56.420 --> 00:28:00.390
gamma times the normalized X plus beta,
00:28:00.390 --> 00:28:00.860
so.
00:28:01.490 --> 00:28:04.520
This allows it to rescale, and gamma and
00:28:04.520 --> 00:28:06.170
beta here are learned parameters.
00:28:06.170 --> 00:28:09.130
So this allows it to like adjust the
00:28:09.130 --> 00:28:12.460
shift and adjust the scaling if it's
00:28:12.460 --> 00:28:14.060
like learned to be effective.
00:28:14.060 --> 00:28:17.340
So if gamma is 1 and beta is 0, then
00:28:17.340 --> 00:28:20.820
this would just be a subtracting the
00:28:20.820 --> 00:28:22.290
mean and dividing by standard deviation
00:28:22.290 --> 00:28:23.180
of each batch.
00:28:23.180 --> 00:28:25.136
But it doesn't necessarily have to be
00:28:25.136 --> 00:28:25.309
0.
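As a minimal sketch of that computation (my own, with an assumed epsilon; the real layer also keeps running statistics for use at test time):

import torch

def batch_norm(x, gamma, beta, eps=1e-5):
    # x: (batch, features); gamma, beta: learned per-feature parameters
    mu = x.mean(dim=0)                         # mean of each feature over the batch
    var = x.var(dim=0, unbiased=False)         # variance of each feature over the batch
    x_hat = (x - mu) / torch.sqrt(var + eps)   # zero-mean, unit-std normalized features
    return gamma * x_hat + beta                # learned shift and scale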
00:28:26.890 --> 00:28:30.136
And this is showing how fast. This
00:28:30.136 --> 00:28:31.870
is showing the accuracy of some
00:28:31.870 --> 00:28:32.400
training.
00:28:33.120 --> 00:28:36.560
With batch norm and without batch norm
00:28:36.560 --> 00:28:38.850
for a certain number of steps or like
00:28:38.850 --> 00:28:39.530
batches.
00:28:39.530 --> 00:28:41.830
And you can see that with batch norm it
00:28:41.830 --> 00:28:43.820
converges like incredibly faster,
00:28:43.820 --> 00:28:44.190
right?
00:28:44.190 --> 00:28:46.200
So they both get there eventually, but.
00:28:46.880 --> 00:28:49.820
But without batch norm, it takes like
00:28:49.820 --> 00:28:54.040
maybe 20 or 30,000 batches before it
00:28:54.040 --> 00:28:57.870
can start to catch up to what the with-
00:28:57.870 --> 00:28:59.990
batch-norm process could do in a
00:28:59.990 --> 00:29:01.590
couple of 1000 iterations.
00:29:02.830 --> 00:29:04.200
And then this thing is showing.
00:29:05.180 --> 00:29:08.920
How the median and the 85th and 15th
00:29:08.920 --> 00:29:10.910
percentile of values are for some
00:29:10.910 --> 00:29:12.670
feature over time.
00:29:13.290 --> 00:29:15.720
And without batch norm, it's kind of
00:29:15.720 --> 00:29:17.950
like unstable, like sometimes the mean
00:29:17.950 --> 00:29:20.380
shifts away from zero and you get
00:29:20.380 --> 00:29:22.430
increases or decreases in the variance, but
00:29:22.430 --> 00:29:24.270
the batch norm results in it being much
00:29:24.270 --> 00:29:25.450
more stable.
00:29:25.450 --> 00:29:27.150
So it slowly increases the variance
00:29:27.150 --> 00:29:29.820
over time and the mean stays at roughly
00:29:29.820 --> 00:29:30.170
0.
00:29:36.620 --> 00:29:39.640
So this is in code what a res block
00:29:39.640 --> 00:29:40.260
looks like.
00:29:41.460 --> 00:29:45.620
So in PyTorch you define, like, you
00:29:45.620 --> 00:29:47.210
always specify
00:29:48.050 --> 00:29:49.700
your network out of these different
00:29:49.700 --> 00:29:51.910
components and then you say like how
00:29:51.910 --> 00:29:53.440
the data will pass through these
00:29:53.440 --> 00:29:53.830
components.
00:29:54.570 --> 00:29:57.485
So a res block is like one section of
00:29:57.485 --> 00:30:00.160
that neural network that has one skip
00:30:00.160 --> 00:30:00.810
connection.
00:30:01.670 --> 00:30:03.070
So first I would start with the
00:30:03.070 --> 00:30:03.720
forward.
00:30:03.720 --> 00:30:05.770
So this says how the data will be
00:30:05.770 --> 00:30:07.040
passed through the network.
00:30:07.040 --> 00:30:09.404
So you compute some shortcut, could
00:30:09.404 --> 00:30:11.180
just be the input.
00:30:11.180 --> 00:30:12.680
I'll get to some detail about that
00:30:12.680 --> 00:30:14.270
later, but let's for now just say you
00:30:14.270 --> 00:30:14.930
have the input.
00:30:16.060 --> 00:30:17.690
Then you pass the input through a
00:30:17.690 --> 00:30:19.710
convolutional layer, through batch
00:30:19.710 --> 00:30:21.270
norm, and through ReLU.
00:30:22.470 --> 00:30:25.440
Then you pass that through another
00:30:25.440 --> 00:30:27.930
convolutional layer, through batch norm
00:30:27.930 --> 00:30:28.860
and through ReLU.
00:30:29.940 --> 00:30:33.800
And then you add the input back to the
00:30:33.800 --> 00:30:36.920
output, and that's your final output
00:30:36.920 --> 00:30:37.800
through one more ReLU.
00:30:38.760 --> 00:30:40.515
So it's pretty simple.
00:30:40.515 --> 00:30:43.665
It's a conv, batch norm, ReLU,
00:30:43.665 --> 00:30:46.080
conv, batch norm, ReLU, add back
00:30:46.080 --> 00:30:48.630
the input and apply one more ReLU.
00:30:49.760 --> 00:30:52.380
And then this is defining the details
00:30:52.380 --> 00:30:54.180
of what these like convolutional layers
00:30:54.180 --> 00:30:54.460
are.
00:30:55.580 --> 00:30:58.280
So first, like, the res block can
00:30:58.280 --> 00:30:59.590
downsample or not.
00:30:59.590 --> 00:31:01.160
So if it down samples, it means that
00:31:01.160 --> 00:31:04.278
you would go from a 14 by 14 feature
00:31:04.278 --> 00:31:06.949
map into a 7 by 7 feature map.
00:31:07.570 --> 00:31:09.630
So typically in these networks, as I'll
00:31:09.630 --> 00:31:12.990
show in a later slide, you tend to
00:31:12.990 --> 00:31:14.870
start with like a very big feature map.
00:31:14.870 --> 00:31:17.640
It's like image size and then you make
00:31:17.640 --> 00:31:19.846
it smaller and smaller spatially, but
00:31:19.846 --> 00:31:21.270
deeper and deeper in terms of the
00:31:21.270 --> 00:31:21.790
features.
00:31:22.440 --> 00:31:24.910
So that instead of representing like
00:31:24.910 --> 00:31:27.400
very weak features at each pixel
00:31:27.400 --> 00:31:29.808
initially you have an RGB value at each
00:31:29.808 --> 00:31:30.134
pixel.
00:31:30.134 --> 00:31:31.650
You start representing really
00:31:31.650 --> 00:31:33.830
complicated features, but with less
00:31:33.830 --> 00:31:35.350
like spatial definition.
00:31:38.620 --> 00:31:41.870
So if you downsample then instead.
00:31:41.870 --> 00:31:43.800
So let me start with not downsample.
00:31:43.800 --> 00:31:45.726
So first if you don't downsample then
00:31:45.726 --> 00:31:48.020
you define the convolution as
00:31:48.020 --> 00:31:49.553
the number of in channels to the
00:31:49.553 --> 00:31:50.510
number of out channels.
00:31:51.340 --> 00:31:54.000
And it's a three by three filter; stride
00:31:54.000 --> 00:31:55.800
is 1 means that it operates over every
00:31:55.800 --> 00:31:56.320
position.
00:31:57.320 --> 00:31:59.270
And padding is one, so you like
00:31:59.270 --> 00:32:01.040
Create some fake values around the
00:32:01.040 --> 00:32:03.020
outside of the feature map so that you
00:32:03.020 --> 00:32:05.905
can compute the filter on the border of
00:32:05.905 --> 00:32:06.460
the feature map.
00:32:09.090 --> 00:32:09.820
The.
00:32:09.970 --> 00:32:10.630
00:32:12.070 --> 00:32:14.120
And then it's yeah, there's nothing.
00:32:14.120 --> 00:32:15.820
It's just saying that's what BN one is,
00:32:15.820 --> 00:32:18.599
a batch norm and then.
00:32:20.020 --> 00:32:21.810
Conv two is the same thing basically,
00:32:21.810 --> 00:32:23.360
except now the out channel, you're
00:32:23.360 --> 00:32:24.995
going from out channels to out
00:32:24.995 --> 00:32:25.280
channels.
00:32:25.280 --> 00:32:28.359
So this could go from like 64 to 128
00:32:28.360 --> 00:32:29.085
for example.
00:32:29.085 --> 00:32:31.480
And then this convolution you'd have
00:32:31.480 --> 00:32:34.850
like 128 filters that are operating on
00:32:34.850 --> 00:32:36.580
a 64 deep.
00:32:37.560 --> 00:32:38.180
Feature map.
00:32:38.820 --> 00:32:42.010
And then here you'd have 128 filters
00:32:42.010 --> 00:32:44.470
operating on a 128 deep feature map.
00:32:46.830 --> 00:32:49.250
Then if you do the downsampling, you
00:32:49.250 --> 00:32:51.170
instead of that first convolution.
00:32:51.170 --> 00:32:53.000
That first convolutional layer is what
00:32:53.000 --> 00:32:54.140
does the down sampling.
00:32:55.100 --> 00:32:56.690
And the way that it does it.
00:32:56.840 --> 00:32:57.430
00:32:58.220 --> 00:33:00.130
Is that it does.
00:33:00.130 --> 00:33:02.240
It has a stride of two, so instead of
00:33:02.240 --> 00:33:03.660
operating at every position, you
00:33:03.660 --> 00:33:05.518
operate on every other position.
00:33:05.518 --> 00:33:07.380
And then since you have an output at
00:33:07.380 --> 00:33:09.280
every other position, it means that
00:33:09.280 --> 00:33:11.250
you'll only have half as many outputs
00:33:11.250 --> 00:33:12.970
along each dimension, and so that will
00:33:12.970 --> 00:33:14.970
make the feature map smaller spatially.
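Putting those pieces together, here is a sketch of the res block being described; the exact argument names are assumptions on my part, but the structure (conv, batch norm, ReLU twice, an optional stride-2 downsample in the first conv, add the shortcut back, one more ReLU) follows the description.

import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, in_channels, out_channels, downsample=False):
        super().__init__()
        stride = 2 if downsample else 1     # stride 2 halves the spatial size
        self.conv1 = nn.Conv2d(in_channels, out_channels, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.conv2 = nn.Conv2d(out_channels, out_channels, 3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU()
        # if the shape changes, the shortcut needs a 1x1 conv so it can still be added back
        if downsample or in_channels != out_channels:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_channels, out_channels, 1, stride=stride, bias=False),
                nn.BatchNorm2d(out_channels))
        else:
            self.shortcut = nn.Identity()
    def forward(self, x):
        shortcut = self.shortcut(x)
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + shortcut)    # add the input back, then one more ReLU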
00:33:20.700 --> 00:33:22.090
And so here's a whole.
00:33:22.090 --> 00:33:23.970
Here's the rest of the code for the
00:33:23.970 --> 00:33:25.060
Resnet architecture.
00:33:25.990 --> 00:33:28.225
So first the forward is pretty simple.
00:33:28.225 --> 00:33:31.690
It goes layer 0, 1, 2, 3, 4.
00:33:31.690 --> 00:33:33.000
This is average pooling.
00:33:33.000 --> 00:33:35.085
So at some point you take all the
00:33:35.085 --> 00:33:36.856
features that are in some little map
00:33:36.856 --> 00:33:38.890
and you just string them all up into a
00:33:38.890 --> 00:33:39.930
big vector.
00:33:39.930 --> 00:33:41.880
Or sorry actually no I said it wrong.
00:33:41.880 --> 00:33:43.596
You take all the features that are into
00:33:43.596 --> 00:33:45.310
a map and then you take the average
00:33:45.310 --> 00:33:46.480
across all the cells.
00:33:46.480 --> 00:33:48.870
So if you had like a three by three map
00:33:48.870 --> 00:33:49.560
of features.
00:33:50.430 --> 00:33:52.220
Then for each feature value you would
00:33:52.220 --> 00:33:53.890
take the average of those nine
00:33:53.890 --> 00:33:56.120
elements, like the 9 spatial positions.
00:33:56.770 --> 00:34:00.460
And that's the average pooling.
00:34:01.720 --> 00:34:03.880
And then the flatten is where you then
00:34:03.880 --> 00:34:06.060
take that you just make it into a big
00:34:06.060 --> 00:34:07.010
long feature vector.
00:34:08.240 --> 00:34:10.315
And then you have your final linear
00:34:10.315 --> 00:34:11.890
layer, so the FC layer.
00:34:13.170 --> 00:34:14.480
And then if you look at the details,
00:34:14.480 --> 00:34:17.020
it's just basically each layer is
00:34:17.020 --> 00:34:18.000
simply.
00:34:18.000 --> 00:34:20.490
The first layer is special, you do a
00:34:20.490 --> 00:34:22.430
convolution with the bigger size
00:34:22.430 --> 00:34:22.800
filter.
00:34:24.180 --> 00:34:26.535
And you also do some Max pooling, so
00:34:26.535 --> 00:34:28.610
you kind of quickly get it into a
00:34:28.610 --> 00:34:30.560
deeper level of a deeper number of
00:34:30.560 --> 00:34:32.280
features at a smaller spatial
00:34:32.280 --> 00:34:32.800
dimension.
00:34:33.760 --> 00:34:35.170
And then in the subsequent layers,
00:34:35.170 --> 00:34:37.740
they're all very similar, you just you
00:34:37.740 --> 00:34:39.210
have two res blocks.
00:34:40.230 --> 00:34:41.780
Two res blocks, then you have two res
00:34:41.780 --> 00:34:43.762
blocks where you're down sampling and
00:34:43.762 --> 00:34:45.420
increasing and increasing the depth,
00:34:45.420 --> 00:34:47.310
increasing the number of features.
00:34:47.310 --> 00:34:49.126
Downsample increase the number of
00:34:49.126 --> 00:34:50.428
features, down sample increase the
00:34:50.428 --> 00:34:51.160
number of features.
00:34:51.890 --> 00:34:54.250
Average pool, and then your final linear layer.
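Sketching that forward pass (assuming the ResBlock sketch above and an 18-layer configuration; the channel counts follow the standard ResNet-18 and are not taken from the slide verbatim):

import torch.nn as nn

class TinyResNet(nn.Module):
    def __init__(self, num_classes=1000):
        super().__init__()
        # layer 0 is special: a big 7x7 conv with stride 2, then max pooling
        self.layer0 = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=2, padding=3, bias=False),
            nn.BatchNorm2d(64), nn.ReLU(), nn.MaxPool2d(3, stride=2, padding=1))
        self.layer1 = nn.Sequential(ResBlock(64, 64), ResBlock(64, 64))
        self.layer2 = nn.Sequential(ResBlock(64, 128, downsample=True), ResBlock(128, 128))
        self.layer3 = nn.Sequential(ResBlock(128, 256, downsample=True), ResBlock(256, 256))
        self.layer4 = nn.Sequential(ResBlock(256, 512, downsample=True), ResBlock(512, 512))
        self.avgpool = nn.AdaptiveAvgPool2d(1)   # average over all spatial positions
        self.fc = nn.Linear(512, num_classes)    # final linear layer
    def forward(self, x):
        x = self.layer4(self.layer3(self.layer2(self.layer1(self.layer0(x)))))
        x = self.avgpool(x).flatten(1)           # average pool, then flatten to a feature vector
        return self.fc(x)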
00:34:59.540 --> 00:35:01.840
And then these are some examples of
00:35:01.840 --> 00:35:03.600
different depths.
00:35:04.780 --> 00:35:08.180
So mainly they just differ in how many
00:35:08.180 --> 00:35:10.650
res blocks you apply and for the larger
00:35:10.650 --> 00:35:12.230
ones, they're doing the bottleneck
00:35:12.230 --> 00:35:15.239
Resnet module instead of like the
00:35:15.240 --> 00:35:17.520
simpler ResNet module that we showed
00:35:17.520 --> 00:35:18.080
in.
00:35:18.080 --> 00:35:19.690
So this is the one that I was showing
00:35:19.690 --> 00:35:20.680
on the earlier slide.
00:35:21.440 --> 00:35:22.820
And this is the code.
00:35:22.820 --> 00:35:24.270
I was showing the code for this one
00:35:24.270 --> 00:35:25.570
which is just a little bit simpler.
00:35:27.550 --> 00:35:28.975
So you start with a.
00:35:28.975 --> 00:35:32.145
So the input to this for Imagenet is a
00:35:32.145 --> 00:35:34.180
224 by 224 image.
00:35:34.180 --> 00:35:35.960
That's RGB.
00:35:37.050 --> 00:35:39.550
And then since the first conv layer is
00:35:39.550 --> 00:35:41.620
doing a stride of two, it ends up being
00:35:41.620 --> 00:35:45.600
a 112 by 112 like image that then has
00:35:45.600 --> 00:35:47.860
64 features per position.
00:35:49.240 --> 00:35:51.890
And then you do another Max pooling
00:35:51.890 --> 00:35:53.680
which is taking the Max value out of
00:35:53.680 --> 00:35:55.490
each two by two chunk of the image.
00:35:55.490 --> 00:35:58.475
So that further reduces the size to 56
00:35:58.475 --> 00:35:59.260
by 56.
00:36:00.600 --> 00:36:05.130
Then you apply these ResNet blocks to it, so
00:36:05.130 --> 00:36:08.999
then the output is still 56
00:36:09.000 --> 00:36:09.840
by 56.
00:36:10.620 --> 00:36:12.260
And then you apply Resnet blocks that
00:36:12.260 --> 00:36:14.860
will downsample it by a factor of two.
00:36:14.860 --> 00:36:18.079
So now you've got a 28 by 28 by.
00:36:18.080 --> 00:36:20.210
If you're doing like Resnet 34, it'd be
00:36:20.210 --> 00:36:21.520
128 dimensional features.
00:36:22.450 --> 00:36:24.315
And then again you downsample, produce
00:36:24.315 --> 00:36:26.965
more features, down sample produce more
00:36:26.965 --> 00:36:27.360
features.
00:36:28.010 --> 00:36:32.570
Average pool and then you finally like
00:36:32.570 --> 00:36:34.740
turn this into like a 512 dimensional
00:36:34.740 --> 00:36:35.440
feature vector.
00:36:35.440 --> 00:36:38.070
If you're doing Resnet 34 or if you
00:36:38.070 --> 00:36:41.021
were doing Resnet 50 to 152, you'd have
00:36:41.021 --> 00:36:42.879
a 2048 dimensional feature vector.
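If you want to check those feature sizes yourself, torchvision's reference implementations expose them; a quick sanity check (recent torchvision API with weights=None; older versions use pretrained=False instead):

import torch, torchvision

m34 = torchvision.models.resnet34(weights=None)
m50 = torchvision.models.resnet50(weights=None)
print(m34.fc.in_features)   # 512  -> final feature vector size for ResNet-18/34
print(m50.fc.in_features)   # 2048 -> final feature vector size for ResNet-50/101/152
x = torch.randn(1, 3, 224, 224)   # one 224 by 224 RGB image
print(m34(x).shape)               # torch.Size([1, 1000]) class scores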
00:36:45.450 --> 00:36:45.950
00:36:50.330 --> 00:36:52.575
So this is multiple blocks.
00:36:52.575 --> 00:36:55.250
So if you see here like each layer for
00:36:55.250 --> 00:36:57.320
ResNet 18 has 2 res blocks in a row.
00:36:58.330 --> 00:37:01.410
So this is for 18 we have like times 2.
00:37:02.110 --> 00:37:03.940
And if you go deeper, then you just
00:37:03.940 --> 00:37:06.090
have more res blocks in a row at the
00:37:06.090 --> 00:37:06.890
same size.
00:37:09.000 --> 00:37:11.360
So each of these layers is transforming
00:37:11.360 --> 00:37:12.720
the feature, is trying to extract
00:37:12.720 --> 00:37:15.640
useful information and also like kind
00:37:15.640 --> 00:37:17.700
of deepening, deepening the features
00:37:17.700 --> 00:37:19.190
and looking broader in the image.
00:37:20.940 --> 00:37:22.570
And then this is showing the number,
00:37:22.570 --> 00:37:25.195
the amount of computation roughly that
00:37:25.195 --> 00:37:26.870
is used for each of these.
00:37:26.870 --> 00:37:28.480
And one thing to note is that when you
00:37:28.480 --> 00:37:31.250
go from 34 to 50, that's when they
00:37:31.250 --> 00:37:32.960
start using the bottleneck layer, so
00:37:32.960 --> 00:37:34.290
there's almost no change in
00:37:34.290 --> 00:37:36.830
computation, even though the 50 is much
00:37:36.830 --> 00:37:39.410
deeper and as many more has many more
00:37:39.410 --> 00:37:39.800
layers.
00:37:41.610 --> 00:37:45.330
And then finally remember that the like
00:37:45.330 --> 00:37:47.810
breakthrough result from AlexNet was
00:37:47.810 --> 00:37:50.800
15% error roughly for top five
00:37:50.800 --> 00:37:51.800
prediction.
00:37:52.750 --> 00:37:56.500
And Resnet 152 gets 4.5% error for top
00:37:56.500 --> 00:37:57.210
five prediction.
00:37:57.980 --> 00:38:00.762
So that's a factor that's more than a
00:38:00.762 --> 00:38:02.420
factor of 3 reduction of error, which
00:38:02.420 --> 00:38:03.410
is really huge.
00:38:06.830 --> 00:38:09.980
And nothing gets like that much
00:38:09.980 --> 00:38:11.120
remarkably better.
00:38:11.120 --> 00:38:11.760
There's no.
00:38:11.760 --> 00:38:13.740
I don't remember what the best is, but
00:38:13.740 --> 00:38:15.070
it's not like that much better than.
00:38:20.020 --> 00:38:20.350
All right.
00:38:21.090 --> 00:38:23.190
So that's resnet.
00:38:23.190 --> 00:38:23.930
One more.
00:38:23.930 --> 00:38:25.940
This is sort of like a sidebar note that
00:38:25.940 --> 00:38:28.220
applies to all the vision methods as
00:38:28.220 --> 00:38:28.480
well.
00:38:29.160 --> 00:38:30.810
A really common trick in computer
00:38:30.810 --> 00:38:32.600
vision is that you do what's called
00:38:32.600 --> 00:38:33.660
data augmentation.
00:38:34.630 --> 00:38:35.650
Which is that like.
00:38:36.310 --> 00:38:39.110
For each image, I'll show the example here.
00:38:39.110 --> 00:38:40.300
So each image.
00:38:41.800 --> 00:38:44.320
You can like change the image in small
00:38:44.320 --> 00:38:46.330
ways and it will create different
00:38:46.330 --> 00:38:47.980
features, but we would say like you
00:38:47.980 --> 00:38:50.130
should interpret all of these images to
00:38:50.130 --> 00:38:50.780
be the same.
00:38:51.510 --> 00:38:53.530
So you might have like a photo like
00:38:53.530 --> 00:38:55.450
this and you can like modify the
00:38:55.450 --> 00:38:57.379
coloring or you can.
00:38:58.740 --> 00:39:01.340
You can apply like some specialized
00:39:01.340 --> 00:39:03.300
filters to it or you can crop it or
00:39:03.300 --> 00:39:05.860
shift it or rotate it and we would
00:39:05.860 --> 00:39:07.684
typically say like these all have the
00:39:07.684 --> 00:39:08.980
same content, all these different
00:39:08.980 --> 00:39:10.180
images have the same content.
00:39:10.860 --> 00:39:12.550
But they're gonna produce slightly
00:39:12.550 --> 00:39:14.040
different features because something
00:39:14.040 --> 00:39:14.640
was done to them.
00:39:15.620 --> 00:39:17.330
And since you're cycling through the
00:39:17.330 --> 00:39:17.720
data.
00:39:18.370 --> 00:39:23.070
Many times, sometimes 100 or 300 times.
00:39:23.070 --> 00:39:24.430
Then it kind of makes sense to create
00:39:24.430 --> 00:39:26.140
little variations of the data rather
00:39:26.140 --> 00:39:28.340
than processing the exact same data
00:39:28.340 --> 00:39:29.550
every time you pass through it.
00:39:30.670 --> 00:39:34.485
And so the idea of data augmentation is
00:39:34.485 --> 00:39:36.495
that you create more variety to your
00:39:36.495 --> 00:39:38.370
data and that kind of like creates
00:39:38.370 --> 00:39:40.600
virtual training examples that can
00:39:40.600 --> 00:39:43.570
further improve the robustness of the
00:39:43.570 --> 00:39:43.940
model.
00:39:45.030 --> 00:39:47.700
So this idea goes back to Pomerleau in
00:39:47.700 --> 00:39:50.110
1995, who used neural networks to drive
00:39:50.110 --> 00:39:50.590
a car.
00:39:52.130 --> 00:39:54.740
But it's it was like picked up again
00:39:54.740 --> 00:39:57.460
more broadly when deep networks became
00:39:57.460 --> 00:39:57.860
popular.
00:40:00.730 --> 00:40:01.050
Yeah.
00:40:02.660 --> 00:40:04.286
And then to do data augmentation, you
00:40:04.286 --> 00:40:06.890
do it like this, where you define like
00:40:06.890 --> 00:40:07.330
transforms.
00:40:07.330 --> 00:40:09.313
If you're doing it in PyTorch, you
00:40:09.313 --> 00:40:11.210
define like the set of transforms that
00:40:11.210 --> 00:40:13.547
apply, and there'll be some like
00:40:13.547 --> 00:40:15.699
randomly mirror, randomly apply some
00:40:15.700 --> 00:40:18.380
rotation within this range, randomly
00:40:18.380 --> 00:40:22.380
resize within some range, randomly
00:40:22.380 --> 00:40:22.740
crop.
00:40:23.750 --> 00:40:25.940
And then?
00:40:27.150 --> 00:40:29.210
And so then like every time the data is
00:40:29.210 --> 00:40:31.080
loaded, then the data loader will like
00:40:31.080 --> 00:40:33.410
apply these transformations so that
00:40:33.410 --> 00:40:35.580
your data gets like modified in these
00:40:35.580 --> 00:40:36.270
various ways.
00:40:37.930 --> 00:40:39.600
That you apply the transform and then
00:40:39.600 --> 00:40:41.320
you input the transform into the data
00:40:41.320 --> 00:40:41.670
loader.
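NOTE
A minimal sketch of that kind of transform pipeline; the specific ranges, dataset path, and batch size here are illustrative assumptions, not values from the lecture.
    from torchvision import transforms, datasets
    from torch.utils.data import DataLoader
    train_tf = transforms.Compose([
        transforms.RandomHorizontalFlip(),                     # randomly mirror
        transforms.RandomRotation(10),                         # random rotation within +/-10 degrees
        transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),   # random resize and crop
        transforms.ColorJitter(brightness=0.2, contrast=0.2),  # small color changes
        transforms.ToTensor(),
    ])
    # The data loader applies the transform each time an image is loaded, so every
    # pass through the data sees a slightly different version of each training image.
    train_set = datasets.ImageFolder("path/to/train", transform=train_tf)
    train_loader = DataLoader(train_set, batch_size=32, shuffle=True)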
00:40:48.210 --> 00:40:48.830
So.
00:40:50.610 --> 00:40:53.920
So far I've talked about one data set,
00:40:53.920 --> 00:40:54.500
Imagenet.
00:40:55.120 --> 00:40:57.630
And some different
00:40:57.630 --> 00:40:58.230
architectures.
00:40:59.570 --> 00:41:03.670
But Imagenet is not itself
00:41:03.670 --> 00:41:05.550
like a very practical application,
00:41:05.550 --> 00:41:05.780
right?
00:41:05.780 --> 00:41:08.450
Like nobody wants to classify images
00:41:08.450 --> 00:41:09.670
into those thousand categories.
00:41:11.300 --> 00:41:13.920
And even after the success of
00:41:13.920 --> 00:41:15.530
Imagenet, it wasn't clear like what
00:41:15.530 --> 00:41:17.110
will be the impact on computer vision,
00:41:17.110 --> 00:41:18.960
because most of our data sets are not
00:41:18.960 --> 00:41:20.355
nearly that big.
00:41:20.355 --> 00:41:22.330
It took a lot of work to create
00:41:22.330 --> 00:41:22.770
Imagenet.
00:41:22.770 --> 00:41:24.420
And if you're just trying to.
00:41:25.450 --> 00:41:27.200
Do some kind of application for your
00:41:27.200 --> 00:41:29.330
company or for personal project or
00:41:29.330 --> 00:41:30.000
whatever.
00:41:30.000 --> 00:41:33.330
Chances are it's like very expensive to
00:41:33.330 --> 00:41:34.670
get that amount of data and you might
00:41:34.670 --> 00:41:36.880
not have that many images available so.
00:41:37.650 --> 00:41:41.168
Is this useful for smaller
00:41:41.168 --> 00:41:42.170
data sets?
00:41:42.170 --> 00:41:43.830
Or problems where not so much data is
00:41:43.830 --> 00:41:44.350
available?
00:41:45.200 --> 00:41:47.580
And so that brings us to the problem of
00:41:47.580 --> 00:41:49.410
like how we can take a model that's
00:41:49.410 --> 00:41:52.200
trained for one data set, Imagenet, and
00:41:52.200 --> 00:41:54.759
then apply it to some other data set.
00:41:55.670 --> 00:41:57.750
You can think about these when you're
00:41:57.750 --> 00:42:01.000
training a deep network to do Imagenet.
00:42:01.000 --> 00:42:03.030
It's not only learning to classify
00:42:03.030 --> 00:42:05.380
images into these Imagenet labels, but
00:42:05.380 --> 00:42:07.770
it's also learning a representation of
00:42:07.770 --> 00:42:09.794
images, and it's that representation
00:42:09.794 --> 00:42:11.300
that can be reused.
00:42:12.250 --> 00:42:13.560
And the reason that the Imagenet
00:42:13.560 --> 00:42:16.336
representation is like pretty effective
00:42:16.336 --> 00:42:18.676
is that there's so many different
00:42:18.676 --> 00:42:21.374
classes and so many
00:42:21.374 --> 00:42:22.320
different images.
00:42:22.320 --> 00:42:24.420
And so a representation that can
00:42:24.420 --> 00:42:26.410
distinguish between these thousand
00:42:26.410 --> 00:42:29.720
classes also probably encodes like most
00:42:29.720 --> 00:42:31.060
of the information that you would need
00:42:31.060 --> 00:42:32.460
to do many other vision tasks.
00:42:37.550 --> 00:42:40.540
So we start with this Imagenet, what
00:42:40.540 --> 00:42:42.300
you would call a pre trained model.
00:42:42.300 --> 00:42:43.840
So it was like trained on Imagenet.
00:42:44.600 --> 00:42:46.680
And we can think of it as having two
00:42:46.680 --> 00:42:47.270
components.
00:42:47.270 --> 00:42:49.660
There's the encoder that's producing a
00:42:49.660 --> 00:42:51.780
good representation of the image, and
00:42:51.780 --> 00:42:54.720
then a decoder linear layer that is
00:42:54.720 --> 00:42:56.875
mapping from that encoded image
00:42:56.875 --> 00:42:59.700
representation into some class logits.
00:43:00.400 --> 00:43:01.340
Or probabilities?
00:43:05.320 --> 00:43:09.555
So one common solution to this is what's
00:43:09.555 --> 00:43:11.630
sometimes called a linear probe
00:43:11.630 --> 00:43:13.410
now or feature extraction.
00:43:14.410 --> 00:43:17.880
So basically you essentially
00:43:17.880 --> 00:43:18.700
just compute.
00:43:18.700 --> 00:43:20.340
You don't change any of the weights in
00:43:20.340 --> 00:43:22.090
all these like convolutional layers.
00:43:23.230 --> 00:43:25.250
You throw out the decoder, so your
00:43:25.250 --> 00:43:26.520
final linear prediction.
00:43:26.520 --> 00:43:27.870
You get rid of it because you want to
00:43:27.870 --> 00:43:29.420
classify something different this time.
00:43:30.170 --> 00:43:32.030
And you replace that final linear
00:43:32.030 --> 00:43:35.540
prediction with a new linear layer
00:43:35.540 --> 00:43:36.900
that's going to predict the classes
00:43:36.900 --> 00:43:38.080
that you actually care about.
00:43:39.040 --> 00:43:41.090
And then, without changing the encoder
00:43:41.090 --> 00:43:45.560
at all, you then extract the same kind
00:43:45.560 --> 00:43:46.980
of features from your new training
00:43:46.980 --> 00:43:49.020
examples, and now you predict the new
00:43:49.020 --> 00:43:52.030
labels that you care about and tune
00:43:52.030 --> 00:43:54.525
your linear model, your final decoder,
00:43:54.525 --> 00:43:56.770
to make those predictions well.
00:43:59.950 --> 00:44:02.550
So there's like two ways to do this.
00:44:02.760 --> 00:44:03.310
00:44:04.320 --> 00:44:08.460
One way is the feature method where you
00:44:08.460 --> 00:44:10.690
basically you just extract features
00:44:10.690 --> 00:44:12.120
using the pre trained model.
00:44:13.030 --> 00:44:14.952
And you save those features and then
00:44:14.952 --> 00:44:17.960
you can just train an SVM or a linear
00:44:17.960 --> 00:44:20.470
logistic regression classifier.
00:44:20.470 --> 00:44:24.140
This is
00:44:24.140 --> 00:44:27.330
actually a very fast way, and the
00:44:27.330 --> 00:44:28.953
very earliest papers would extract
00:44:28.953 --> 00:44:30.660
features this way and then apply an
00:44:30.660 --> 00:44:30.970
SVM.
00:44:33.240 --> 00:44:36.010
So you load pre trained model.
00:44:36.010 --> 00:44:36.970
There's many.
00:44:36.970 --> 00:44:38.946
If you go to this link there's tons of
00:44:38.946 --> 00:44:40.130
pre trained models available.
00:44:41.450 --> 00:44:43.466
You remove the final prediction layer,
00:44:43.466 --> 00:44:45.120
the final linear classifier.
00:44:46.130 --> 00:44:48.360
You apply the model to each of your new
00:44:48.360 --> 00:44:50.310
training images to get their features,
00:44:50.310 --> 00:44:51.340
and then you save them.
00:44:52.150 --> 00:44:54.345
And so you have
00:44:54.345 --> 00:44:56.335
like a data set where you have X your
00:44:56.335 --> 00:44:57.570
features from the deep network.
00:44:57.570 --> 00:44:59.350
It will be for example like 512
00:44:59.350 --> 00:45:00.990
dimensional features for each image.
00:45:01.940 --> 00:45:05.460
And your labels that you've annotated
00:45:05.460 --> 00:45:07.790
for your new classification problem.
00:45:08.490 --> 00:45:09.925
And then you just train a new linear
00:45:09.925 --> 00:45:11.410
model and you can use whatever you
00:45:11.410 --> 00:45:11.560
want.
00:45:11.560 --> 00:45:12.813
It doesn't even have to be a linear
00:45:12.813 --> 00:45:14.340
model, but usually that's what people
00:45:14.340 --> 00:45:14.680
would do.
00:45:16.690 --> 00:45:17.960
So here's the code for that.
00:45:17.960 --> 00:45:19.770
It's like pretty trivial.
00:45:21.060 --> 00:45:21.730
Very short.
00:45:22.380 --> 00:45:25.210
So let's say you want AlexNet,
00:45:25.210 --> 00:45:29.900
you just like, import these things in
00:45:29.900 --> 00:45:31.080
your notebook or whatever.
00:45:31.880 --> 00:45:34.022
You get the AlexNet model, set it to
00:45:34.022 --> 00:45:34.660
be pre trained.
00:45:34.660 --> 00:45:35.670
So this will be pre trained on
00:45:35.670 --> 00:45:36.090
Imagenet.
00:45:36.750 --> 00:45:38.920
This pre trained equals true will work,
00:45:38.920 --> 00:45:40.290
but it's deprecated.
00:45:40.290 --> 00:45:41.780
It will get you the Imagenet model but
00:45:41.780 --> 00:45:43.090
there's actually like you can get
00:45:43.090 --> 00:45:44.410
models that are pre trained on other
00:45:44.410 --> 00:45:45.090
things as well.
00:45:46.820 --> 00:45:49.770
And then this is just a very compact
00:45:49.770 --> 00:45:52.100
way of chopping off the last layer, so
00:45:52.100 --> 00:45:54.410
it's like keeping all the layers up to
00:45:54.410 --> 00:45:55.180
the last one.
00:45:56.580 --> 00:45:58.490
And then you just.
00:45:58.490 --> 00:46:00.800
So this is like doing steps one and
00:46:00.800 --> 00:46:01.130
two.
00:46:01.760 --> 00:46:03.280
And then you would just loop through
00:46:03.280 --> 00:46:06.805
your new images, use this, apply this
00:46:06.805 --> 00:46:09.682
new model to your images to get the
00:46:09.682 --> 00:46:11.360
features and then save those features.
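NOTE
A minimal sketch of that feature-extraction recipe; the loop, the train_loader name, and the sklearn classifier at the end are illustrative assumptions rather than the slide's exact code.
    import torch, torch.nn as nn
    from torchvision import models
    model = models.alexnet(pretrained=True)  # pretrained=True still works but is deprecated
    # Chop off the last linear prediction layer, keeping the layers up to it.
    model.classifier = nn.Sequential(*list(model.classifier.children())[:-1])
    model.eval()
    feats, labels = [], []
    with torch.no_grad():
        for x, y in train_loader:            # loader over your new training images (assumed)
            feats.append(model(x))           # e.g. 4096-dim AlexNet features per image
            labels.append(y)
    X = torch.cat(feats).numpy(); Y = torch.cat(labels).numpy()
    # Then train any simple classifier on the saved features, e.g. logistic regression or an SVM.
    from sklearn.linear_model import LogisticRegression
    clf = LogisticRegression(max_iter=1000).fit(X, Y)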
00:46:13.640 --> 00:46:17.350
The other method that you can do is
00:46:17.350 --> 00:46:19.520
that you just like freeze your encoder
00:46:19.520 --> 00:46:22.260
so that term, freeze or frozen weights,
00:46:23.460 --> 00:46:25.465
means that you don't allow the
00:46:25.465 --> 00:46:26.500
weights to change.
00:46:26.500 --> 00:46:29.095
So you process examples using those
00:46:29.095 --> 00:46:30.960
weights, but you don't update the
00:46:30.960 --> 00:46:31.840
weights during training.
00:46:32.960 --> 00:46:34.830
So again you load pre trained model.
00:46:35.710 --> 00:46:38.390
You set the network to not update the
00:46:38.390 --> 00:46:39.280
encoder weights.
00:46:39.280 --> 00:46:41.360
You replace the last layer just like
00:46:41.360 --> 00:46:42.980
before with your new linear layer.
00:46:43.630 --> 00:46:45.220
And then you train the network with the
00:46:45.220 --> 00:46:45.810
new data set.
00:46:46.990 --> 00:46:49.284
And this is a
00:46:49.284 --> 00:46:50.810
bit slower than the method on the left,
00:46:50.810 --> 00:46:52.280
because every time you process a
00:46:52.280 --> 00:46:53.500
training sample you have to run it
00:46:53.500 --> 00:46:54.320
through the whole network.
00:46:55.000 --> 00:46:58.130
But then the advantages are that
00:46:58.130 --> 00:46:59.570
you don't have to store any features.
00:47:00.250 --> 00:47:01.720
And you can also apply data
00:47:01.720 --> 00:47:03.350
augmentation, so you can create like
00:47:03.350 --> 00:47:05.910
there's random variations each time you
00:47:05.910 --> 00:47:08.500
process the training data and pass it
00:47:08.500 --> 00:47:10.470
and then like process it through the
00:47:10.470 --> 00:47:10.780
network.
00:47:12.040 --> 00:47:14.980
So this code is also pretty simple.
00:47:14.980 --> 00:47:15.560
You do.
00:47:16.920 --> 00:47:18.610
For each of your model parameters,
00:47:18.610 --> 00:47:20.650
first you set requires grad equals
00:47:20.650 --> 00:47:22.420
false, which means that it's not going
00:47:22.420 --> 00:47:23.640
to update them or compute the
00:47:23.640 --> 00:47:24.150
gradients.
00:47:25.090 --> 00:47:28.885
And then you just set the last layer
00:47:28.885 --> 00:47:33.020
of the model, model.fc, to a new
00:47:33.020 --> 00:47:33.630
nn.Linear layer.
00:47:33.630 --> 00:47:34.630
So this is if you're.
00:47:35.780 --> 00:47:38.050
Doing a Resnet 34 for example, where
00:47:38.050 --> 00:47:40.010
the output is 512 dimensional.
00:47:40.600 --> 00:47:41.970
And this would be mapping into eight
00:47:41.970 --> 00:47:43.340
classes in this example.
00:47:45.230 --> 00:47:47.615
And then, when you add a new layer,
00:47:47.615 --> 00:47:49.810
by default its gradients are on, so
00:47:49.810 --> 00:47:51.410
then when you train
00:47:51.410 --> 00:47:53.030
this network it won't change the
00:47:53.030 --> 00:47:54.670
encoder at all, it will only change
00:47:54.670 --> 00:47:56.310
your final classification layer.
00:47:57.240 --> 00:47:59.500
And this model.cuda() is just saying
00:47:59.500 --> 00:48:01.320
that you're going to be putting it into
00:48:01.320 --> 00:48:01.990
the GPU.
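NOTE
A rough sketch of that frozen-encoder setup, assuming a ResNet-34 backbone and the 8 target classes from the example; the optimizer line is an added illustration, and the training loop itself is omitted.
    import torch, torch.nn as nn
    from torchvision import models
    model = models.resnet34(pretrained=True)
    for param in model.parameters():
        param.requires_grad = False      # freeze the encoder: no gradients, no weight updates
    model.fc = nn.Linear(512, 8)         # new last layer; its gradients are on by default
    model = model.cuda()                 # put the model on the GPU
    # Only the new layer's parameters need to be given to the optimizer.
    optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)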
00:48:05.190 --> 00:48:05.510
Question.
00:48:17.580 --> 00:48:20.550
So you're so the question is, what does
00:48:20.550 --> 00:48:21.680
it mean to train if you're not updating
00:48:21.680 --> 00:48:22.100
the weights?
00:48:22.100 --> 00:48:23.610
Well, you're just updating these
00:48:23.610 --> 00:48:24.870
weights, the decoder weights.
00:48:25.530 --> 00:48:27.340
So you're training the last linear
00:48:27.340 --> 00:48:29.220
layer, but these are not changing at
00:48:29.220 --> 00:48:31.670
all, so it's producing the same.
00:48:31.670 --> 00:48:33.280
The features that it produces for a
00:48:33.280 --> 00:48:35.236
given image doesn't change during the
00:48:35.236 --> 00:48:37.300
training, but then the classification
00:48:37.300 --> 00:48:40.020
from those features into your class
00:48:40.020 --> 00:48:41.130
scores does change.
00:48:47.240 --> 00:48:49.330
Alright, so the next solution
00:48:49.330 --> 00:48:50.830
is called fine tuning.
00:48:50.830 --> 00:48:52.770
That is also like a term that you will
00:48:52.770 --> 00:48:54.470
run into all the time without any
00:48:54.470 --> 00:48:55.160
explanation.
00:48:57.030 --> 00:48:59.880
So this is actually really unintuitive.
00:49:00.790 --> 00:49:03.700
Or it may be intuitive, but it's kind
00:49:03.700 --> 00:49:04.260
of not.
00:49:04.260 --> 00:49:05.860
You wouldn't necessarily think this
00:49:05.860 --> 00:49:07.730
would work if you didn't know it works.
00:49:09.020 --> 00:49:10.880
So the idea of fine tuning is actually
00:49:10.880 --> 00:49:12.480
just you allow all the weights to
00:49:12.480 --> 00:49:13.090
change.
00:49:13.970 --> 00:49:15.610
But you just set a much smaller
00:49:15.610 --> 00:49:18.050
learning rate, so it can't change as
00:49:18.050 --> 00:49:21.070
easily or it won't change as much, so
00:49:21.070 --> 00:49:21.970
it's a little.
00:49:21.970 --> 00:49:24.200
What's weird about it is that usually
00:49:24.200 --> 00:49:26.215
when you have an optimization problem,
00:49:26.215 --> 00:49:27.880
you want to try to like.
00:49:28.600 --> 00:49:29.950
Solve that.
00:49:31.170 --> 00:49:32.320
You want to like solve that
00:49:32.320 --> 00:49:33.690
optimization problem as well as
00:49:33.690 --> 00:49:35.110
possible to get the best score in your
00:49:35.110 --> 00:49:35.620
objective.
00:49:36.390 --> 00:49:38.980
And in this case, you're actually
00:49:38.980 --> 00:49:40.492
trying to hit a local minimum.
00:49:40.492 --> 00:49:42.780
You're trying to get into a suboptimal
00:49:42.780 --> 00:49:44.200
solution according to your objective
00:49:44.200 --> 00:49:46.790
function by starting with what you
00:49:46.790 --> 00:49:49.070
think a priori is a good solution and
00:49:49.070 --> 00:49:50.920
allowing it to not drift too far from
00:49:50.920 --> 00:49:51.600
that solution.
00:49:52.960 --> 00:49:55.410
So there's for example, like, let's
00:49:55.410 --> 00:49:56.965
suppose that you had this thing trained
00:49:56.965 --> 00:49:57.890
on Imagenet.
00:49:57.890 --> 00:50:00.356
It's trained on like millions of images
00:50:00.356 --> 00:50:01.616
on thousands of classes.
00:50:01.616 --> 00:50:03.010
So you're pretty confident that it
00:50:03.010 --> 00:50:04.900
learned a really good representation.
00:50:04.900 --> 00:50:06.665
But you have some new data set where
00:50:06.665 --> 00:50:08.916
you have 10 different classes and you
00:50:08.916 --> 00:50:11.055
have 100 images for each of those ten
00:50:11.055 --> 00:50:11.680
classes.
00:50:11.680 --> 00:50:13.950
Now that's like not enough data to
00:50:13.950 --> 00:50:15.866
really learn your encoder.
00:50:15.866 --> 00:50:18.630
It's not enough data to learn this big
00:50:18.630 --> 00:50:19.170
like.
00:50:20.130 --> 00:50:21.860
Million parameter network.
00:50:23.410 --> 00:50:25.200
On the other hand, maybe Imagenet is
00:50:25.200 --> 00:50:27.130
not the perfect representation for your
00:50:27.130 --> 00:50:29.870
new tasks, so you want to allow the
00:50:29.870 --> 00:50:31.570
training to just kind of tweak your
00:50:31.570 --> 00:50:34.550
encoding, to tweak your deep network
00:50:34.550 --> 00:50:37.520
and to learn a new linear layer, but
00:50:37.520 --> 00:50:40.459
not to totally redo the network.
00:50:41.540 --> 00:50:44.470
So it's like a really hacky solution,
00:50:44.470 --> 00:50:46.210
but it works really well in practice:
00:50:46.210 --> 00:50:47.835
you just set a lower learning rate.
00:50:47.835 --> 00:50:50.190
So you say you use like a 10X smaller
00:50:50.190 --> 00:50:51.250
learning rate than normal.
00:50:51.950 --> 00:50:53.790
And you train it just as you normally
00:50:53.790 --> 00:50:54.080
would.
00:50:54.080 --> 00:50:55.740
So you download the model, you start
00:50:55.740 --> 00:50:57.390
with that, use that as your starting
00:50:57.390 --> 00:50:59.420
point, and then you train it with a low
00:50:59.420 --> 00:51:01.330
learning rate and that's it.
00:51:03.770 --> 00:51:05.770
So how does it work, in more detail?
00:51:06.510 --> 00:51:08.630
Load the pretrained model just like before,
00:51:08.630 --> 00:51:09.890
replace the last layer.
00:51:09.890 --> 00:51:11.750
Set a lower learning rate so it could
00:51:11.750 --> 00:51:13.710
be like 1e-4, for example,
00:51:13.710 --> 00:51:15.690
instead of 1e-3 or
00:51:15.690 --> 00:51:20.010
1e-2, meaning like 0.0001 instead of 0.001.
00:51:20.930 --> 00:51:21.420
00:51:22.080 --> 00:51:23.939
The learning rate typically is like 10
00:51:23.940 --> 00:51:25.520
times smaller than what you would use
00:51:25.520 --> 00:51:27.220
if you were training something from
00:51:27.220 --> 00:51:29.000
scratch from random initialization.
00:51:30.090 --> 00:51:32.190
One trick that can help is that
00:51:32.190 --> 00:51:34.230
sometimes you would want to do like the
00:51:34.230 --> 00:51:35.840
freezing method to train your last
00:51:35.840 --> 00:51:38.510
layer classifier first, and
00:51:38.510 --> 00:51:40.090
then you start tuning the whole
00:51:40.090 --> 00:51:40.510
network.
00:51:40.510 --> 00:51:42.550
And the reason for that is that when
00:51:42.550 --> 00:51:45.160
you first like add the last layer in.
00:51:45.990 --> 00:51:48.070
For your new task, it's random weights,
00:51:48.070 --> 00:51:50.310
so it's a really bad classifier.
00:51:50.970 --> 00:51:52.950
So it's going to be sending all kinds
00:51:52.950 --> 00:51:54.790
of gradients back into the network
00:51:54.790 --> 00:51:56.950
based on its own terrible
00:51:56.950 --> 00:51:58.310
classification ability.
00:51:59.230 --> 00:52:00.970
And that will start to like mess up
00:52:00.970 --> 00:52:02.530
your really nice encoder.
00:52:03.280 --> 00:52:05.870
And so it can be better to first train
00:52:05.870 --> 00:52:08.740
the last layer and then like allow the
00:52:08.740 --> 00:52:11.480
encoder to start training so that it's
00:52:11.480 --> 00:52:14.650
getting more meaningful weight update
00:52:14.650 --> 00:52:15.830
signals from the classifier.
00:52:17.960 --> 00:52:19.833
The other the other trick you can do is
00:52:19.833 --> 00:52:21.500
you can set a different learning rate
00:52:21.500 --> 00:52:23.050
for the earlier layers than you do for
00:52:23.050 --> 00:52:25.300
the later layers or the final
00:52:25.300 --> 00:52:29.790
classifier, with the justification that
00:52:29.790 --> 00:52:31.240
your last layer is something that you
00:52:31.240 --> 00:52:33.272
need to train from scratch, so it needs
00:52:33.272 --> 00:52:35.356
to change a lot, but the earlier layers
00:52:35.356 --> 00:52:36.860
you don't want to change too much.
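NOTE
A hedged sketch of setting different learning rates per part of the network using optimizer parameter groups; the split into encoder vs. new last layer and the specific rates are illustrative, not a prescribed recipe.
    import torch, torch.nn as nn
    from torchvision import models
    model = models.resnet34(pretrained=True)
    model.fc = nn.Linear(512, 10)        # new head for, say, 10 target classes
    encoder_params = [p for name, p in model.named_parameters() if not name.startswith("fc")]
    optimizer = torch.optim.SGD([
        {"params": encoder_params, "lr": 1e-4},         # pretrained encoder: small learning rate
        {"params": model.fc.parameters(), "lr": 1e-2},  # new classifier: larger learning rate
    ], momentum=0.9)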
00:52:39.560 --> 00:52:42.490
Wagg created this notebook which shows
00:52:42.490 --> 00:52:44.160
like how you can customize the learning
00:52:44.160 --> 00:52:46.490
rate per layer, how you can initialize
00:52:46.490 --> 00:52:49.110
weights, freeze different parts of the
00:52:49.110 --> 00:52:49.880
network.
00:52:49.880 --> 00:52:52.140
So I'm not going to go through it in
00:52:52.140 --> 00:52:54.050
class, but it's a good thing to check
00:52:54.050 --> 00:52:55.980
out if you're interested in those
00:52:55.980 --> 00:52:56.490
details.
00:52:58.000 --> 00:53:01.080
So this is the fine
00:53:01.080 --> 00:53:01.920
tuning code.
00:53:01.920 --> 00:53:03.600
Well, I'm missing the training, but
00:53:03.600 --> 00:53:05.120
the training is the same as it would be
00:53:05.120 --> 00:53:06.190
for training from scratch.
00:53:07.030 --> 00:53:09.300
You set the number of target classes,
00:53:09.300 --> 00:53:10.620
you load a model.
00:53:12.540 --> 00:53:14.410
And then you just replace the last
00:53:14.410 --> 00:53:16.820
layer with your new layer.
00:53:16.820 --> 00:53:18.170
This should have the same number of
00:53:18.170 --> 00:53:20.330
features that this model produces.
00:53:21.060 --> 00:53:22.600
And output into the number of target
00:53:22.600 --> 00:53:23.060
classes.
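NOTE
A minimal fine-tuning sketch along those lines; the number of classes, the learning rate, and the one-pass training loop are illustrative assumptions, and the loss/optimizer setup is standard rather than anything specific from the slide.
    import torch, torch.nn as nn
    from torchvision import models
    num_classes = 10                                           # your target classes (assumed)
    model = models.resnet34(pretrained=True)
    model.fc = nn.Linear(model.fc.in_features, num_classes)    # replace the last layer
    model = model.cuda()
    # Roughly 10x smaller learning rate than training from scratch.
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for x, y in train_loader:                                  # loader over your new data set (assumed)
        x, y = x.cuda(), y.cuda()
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()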
00:53:24.200 --> 00:53:28.725
So it's just so easy to
00:53:28.725 --> 00:53:32.000
train a new vision
00:53:32.000 --> 00:53:33.980
classifier for your task once you have
00:53:33.980 --> 00:53:36.105
the data, once you have the images and
00:53:36.105 --> 00:53:38.957
the labels, it's like boilerplate code
00:53:38.957 --> 00:53:41.494
to like train anything for that and it
00:53:41.494 --> 00:53:43.860
will work pretty well and it will make
00:53:43.860 --> 00:53:46.630
use of this massive Imagenet data set
00:53:46.630 --> 00:53:48.140
that has like trained a really good
00:53:48.140 --> 00:53:48.800
encoder for you.
00:53:50.080 --> 00:53:50.440
Question.
00:53:58.520 --> 00:54:01.120
The last layer, because it's totally
00:54:01.120 --> 00:54:02.690
new, so the.
00:54:03.730 --> 00:54:05.770
So here the last layer is your new
00:54:05.770 --> 00:54:06.160
decoder.
00:54:06.160 --> 00:54:08.692
It's your linear classifier for your
00:54:08.692 --> 00:54:09.718
new task.
00:54:09.718 --> 00:54:14.340
So you have to train that layer in
00:54:14.340 --> 00:54:15.660
order to do your new classification
00:54:15.660 --> 00:54:15.990
task.
00:54:16.720 --> 00:54:19.860
While this was already initialized by
00:54:19.860 --> 00:54:20.550
Imagenet.
00:54:21.390 --> 00:54:23.340
It's not really that common to set
00:54:23.340 --> 00:54:24.810
different learning rates for different
00:54:24.810 --> 00:54:26.020
parts of the encoder.
00:54:26.020 --> 00:54:28.050
So you could say like maybe the later
00:54:28.050 --> 00:54:30.300
layers, because they represent
00:54:30.300 --> 00:54:31.503
higher level features,
00:54:31.503 --> 00:54:34.630
should change more, while the
00:54:34.630 --> 00:54:36.247
earlier layers are representing simpler
00:54:36.247 --> 00:54:37.790
features that are going to be more
00:54:37.790 --> 00:54:38.640
generally useful.
00:54:39.650 --> 00:54:41.650
But it's not really common to do that.
00:54:41.650 --> 00:54:43.996
There's some justification, but it's
00:54:43.996 --> 00:54:44.990
not common practice.
00:54:53.760 --> 00:54:56.820
They're like higher level features.
00:54:56.820 --> 00:54:59.660
I can share some
00:54:59.660 --> 00:55:01.320
examples of, like what they represent.
00:55:02.460 --> 00:55:03.960
So this is just showing.
00:55:04.060 --> 00:55:04.650
00:55:05.930 --> 00:55:10.005
If you look at the performance of
00:55:10.005 --> 00:55:12.630
these different transfer methods as you
00:55:12.630 --> 00:55:14.498
vary the number of training samples.
00:55:14.498 --> 00:55:16.210
So here it's showing that
00:55:16.210 --> 00:55:18.190
the axis is actually 1 divided by the
00:55:18.190 --> 00:55:20.000
number of training samples per class or
00:55:20.000 --> 00:55:21.850
square root of that, so that these end
00:55:21.850 --> 00:55:24.310
up being like roughly linear curves.
00:55:25.510 --> 00:55:27.650
But for example, this means that
00:55:27.650 --> 00:55:29.731
there's 400 training samples per Class,
00:55:29.731 --> 00:55:31.686
100 training examples per class, 45
00:55:31.686 --> 00:55:32.989
training examples per class.
00:55:33.700 --> 00:55:36.160
The green is if you train from scratch
00:55:36.160 --> 00:55:37.930
and this is the error.
00:55:37.930 --> 00:55:40.060
So if you have very few examples, then
00:55:40.060 --> 00:55:41.930
training from scratch performs really
00:55:41.930 --> 00:55:43.610
badly because you don't have enough
00:55:43.610 --> 00:55:44.820
examples to learn the encoder.
00:55:45.480 --> 00:55:46.770
But it does better as you get more
00:55:46.770 --> 00:55:48.260
examples like pretty sharply.
00:55:49.460 --> 00:55:52.200
If you use your pre trained model and
00:55:52.200 --> 00:55:54.260
linear probe you get the blue line.
00:55:54.260 --> 00:55:56.370
So if you have lots of examples that
00:55:56.370 --> 00:55:58.430
doesn't do as well, but if you have a
00:55:58.430 --> 00:56:00.956
few examples it can do quite well, it
00:56:00.956 --> 00:56:02.090
can be your best solution.
00:56:03.450 --> 00:56:05.146
And then the purple line is if you fine
00:56:05.146 --> 00:56:06.816
tune so you pre trained on Imagenet and
00:56:06.816 --> 00:56:09.360
then you fine tune to, in this case,
00:56:09.360 --> 00:56:11.290
CIFAR-100, which is 100 different object
00:56:11.290 --> 00:56:16.160
classes, and that generally works
00:56:16.160 --> 00:56:16.940
the best.
00:56:18.040 --> 00:56:20.580
It generally outperforms the linear
00:56:20.580 --> 00:56:22.030
model, except when you have very, very
00:56:22.030 --> 00:56:23.590
few training examples per class.
00:56:24.200 --> 00:56:25.640
And then this is showing the same thing
00:56:25.640 --> 00:56:27.900
on a different data set, which is a
00:56:27.900 --> 00:56:29.990
much larger data set for place
00:56:29.990 --> 00:56:30.550
recognition.
00:56:32.590 --> 00:56:34.050
So it's a little late for it, but I'll
00:56:34.050 --> 00:56:36.180
do it anyway so we can take a quick
00:56:36.180 --> 00:56:38.029
break and if you would like, you can
00:56:38.030 --> 00:56:40.130
think about this question, which I'll
00:56:40.130 --> 00:56:41.500
answer after the break.
00:56:53.360 --> 00:56:55.420
You can do your own task as well.
00:57:00.350 --> 00:57:01.440
Challenges 1/2.
00:57:03.020 --> 00:57:05.430
Yeah, it says here you can choose pre
00:57:05.430 --> 00:57:07.010
selected challenge, select your own
00:57:07.010 --> 00:57:08.499
benchmark task or create your own
00:57:08.500 --> 00:57:09.290
custom task.
00:57:09.690 --> 00:57:11.080
Yeah.
00:57:13.000 --> 00:57:16.150
So the red one is if you use a randomly
00:57:16.150 --> 00:57:17.990
initialized network and then you just
00:57:17.990 --> 00:57:18.940
train a linear model.
00:57:21.780 --> 00:57:23.250
It's worse; this is error.
00:57:25.230 --> 00:57:25.520
Yeah.
00:57:48.800 --> 00:57:50.230
Just because you can't see it in the
00:57:50.230 --> 00:57:51.950
table but the first layer, they don't
00:57:51.950 --> 00:57:52.960
apply downsampling.
00:57:54.730 --> 00:57:57.480
So in the first block it's just like
00:57:57.480 --> 00:57:59.340
processing features without down
00:57:59.340 --> 00:57:59.840
sampling.
00:58:00.980 --> 00:58:02.876
Yeah, the table doesn't show where the
00:58:02.876 --> 00:58:04.450
downsampling's happening except through
00:58:04.450 --> 00:58:05.940
the size changing.
00:58:05.940 --> 00:58:09.470
But if you look here, it's like down
00:58:09.470 --> 00:58:10.970
sample false, down sample false.
00:58:11.550 --> 00:58:13.080
So it doesn't downsample in the first
00:58:13.080 --> 00:58:15.396
layer and then it down sample true,
00:58:15.396 --> 00:58:16.999
down sample true, down sample true.
00:58:32.470 --> 00:58:33.760
Maybe I can answer after.
00:58:36.060 --> 00:58:36.570
All right.
00:58:36.570 --> 00:58:39.670
So does anybody have like a simple
00:58:39.670 --> 00:58:41.590
explanation for this question?
00:58:41.590 --> 00:58:44.392
So why does when does each one have an
00:58:44.392 --> 00:58:46.593
advantage and why does it have an
00:58:46.593 --> 00:58:48.000
advantage, I think, in terms of the
00:58:48.000 --> 00:58:48.950
amount of training data?
00:58:58.690 --> 00:59:01.120
So you can think about this in terms of
00:59:01.120 --> 00:59:02.900
the bias variance tradeoff, right?
00:59:02.900 --> 00:59:04.420
So if you have a lot of data.
00:59:05.370 --> 00:59:07.752
Then you can afford to have a
00:59:07.752 --> 00:59:09.860
lower bias classifier
00:59:09.860 --> 00:59:12.305
that has higher variance because the
00:59:12.305 --> 00:59:13.910
data will reduce that variance.
00:59:13.910 --> 00:59:16.290
Your ultimate variance depends on the
00:59:16.290 --> 00:59:17.690
complexity of the model as well as the
00:59:17.690 --> 00:59:18.570
amount of data you have.
00:59:19.480 --> 00:59:22.360
And so if you have very little data
00:59:22.360 --> 00:59:24.260
then linear probe might be your best
00:59:24.260 --> 00:59:26.680
solution because all you're training is
00:59:26.680 --> 00:59:28.185
that last classification layer.
00:59:28.185 --> 00:59:30.630
So use your limited data to train just
00:59:30.630 --> 00:59:32.190
a linear model so.
00:59:33.210 --> 00:59:35.560
If you look at the blue curve, the blue
00:59:35.560 --> 00:59:38.100
curve starts to outperform when you
00:59:38.100 --> 00:59:40.130
have like less than 16 examples per
00:59:40.130 --> 00:59:40.760
class.
00:59:40.760 --> 00:59:42.770
Then it achieves the lowest
00:59:42.770 --> 00:59:43.090
error.
00:59:44.030 --> 00:59:45.900
So if you have very limited data, then
00:59:45.900 --> 00:59:47.420
just training the linear probe and
00:59:47.420 --> 00:59:49.140
trusting your encoding may be best.
00:59:50.980 --> 00:59:53.480
If you have like a pretty good amount
00:59:53.480 --> 00:59:54.060
of data.
00:59:54.720 --> 00:59:57.090
Then fine tuning is the best.
00:59:57.800 --> 00:59:59.833
Because you're starting with that
00:59:59.833 --> 01:00:01.590
initial solution from the encoder and
01:00:01.590 --> 01:00:03.410
allowing it to drift some, but not too
01:00:03.410 --> 01:00:03.680
much.
01:00:03.680 --> 01:00:05.400
So you're kind of constraining it based
01:00:05.400 --> 01:00:06.710
on the initial encoding that you
01:00:06.710 --> 01:00:07.720
learned from lots of data.
01:00:08.540 --> 01:00:13.400
And so the purple curve which is fine
01:00:13.400 --> 01:00:15.970
tuning works the best for like a big
01:00:15.970 --> 01:00:19.176
section of the this X axis, which is
01:00:19.176 --> 01:00:20.400
the amount of training data that you
01:00:20.400 --> 01:00:20.760
have.
01:00:21.670 --> 01:00:24.790
But if you have a ton of data, then
01:00:24.790 --> 01:00:26.850
there's no point fine tuning from some
01:00:26.850 --> 01:00:28.520
other data set if you have way more
01:00:28.520 --> 01:00:30.060
data than Imagenet for example.
01:00:30.690 --> 01:00:32.070
Then you should be able to train from
01:00:32.070 --> 01:00:34.530
scratch and optimize as possible and
01:00:34.530 --> 01:00:36.440
get a better encoding than if you just
01:00:36.440 --> 01:00:38.280
like kind of tie your encoding to the
01:00:38.280 --> 01:00:39.400
initial one from Imagenet.
01:00:40.260 --> 01:00:42.415
And so training from scratch can work
01:00:42.415 --> 01:00:44.663
the best if you
01:00:44.663 --> 01:00:46.790
have a lot of data like Imagenet scale
01:00:46.790 --> 01:00:47.130
data.
01:00:52.680 --> 01:00:55.290
So I'm going to give you a sense
01:00:55.290 --> 01:00:57.960
of how detection works with these deep
01:00:57.960 --> 01:00:58.810
networks.
01:00:58.940 --> 01:00:59.470
01:01:00.530 --> 01:01:01.710
And actually I'm going to go a little
01:01:01.710 --> 01:01:02.260
bit out of order.
01:01:02.260 --> 01:01:03.450
Let me come back to that in just a
01:01:03.450 --> 01:01:03.740
second.
01:01:04.560 --> 01:01:05.720
Because.
01:01:07.660 --> 01:01:10.149
I want to show you a
01:01:10.150 --> 01:01:11.990
visualization of what the network's
01:01:11.990 --> 01:01:12.520
learning.
01:01:12.520 --> 01:01:14.380
This applies to classification as well.
01:01:15.100 --> 01:01:17.020
So to create this visualization, these
01:01:17.020 --> 01:01:20.480
researchers they back propagate the
01:01:20.480 --> 01:01:21.960
gradients through the network so that
01:01:21.960 --> 01:01:24.300
they can see which pixels are causing a
01:01:24.300 --> 01:01:26.100
feature to be activated to have like a
01:01:26.100 --> 01:01:26.690
high value.
01:01:27.600 --> 01:01:28.900
And here they're showing the
01:01:28.900 --> 01:01:33.310
activations and the image patches that
01:01:33.310 --> 01:01:35.250
like strongly activated particularly
01:01:35.250 --> 01:01:35.870
features.
01:01:35.870 --> 01:01:38.040
So you can see that in the first layer
01:01:38.040 --> 01:01:40.360
of the network and this is 2014 before
01:01:40.360 --> 01:01:40.575
Resnet.
01:01:40.575 --> 01:01:42.530
So this is for like AlexNet-style
01:01:42.530 --> 01:01:44.200
networks in this particular example.
01:01:44.840 --> 01:01:47.588
But in the early layers of the network,
01:01:47.588 --> 01:01:50.080
the network is basically representing
01:01:50.080 --> 01:01:51.110
color and edges.
01:01:51.110 --> 01:01:53.000
So like each of these three by three
01:01:53.000 --> 01:01:55.460
blocks are like patches that had high
01:01:55.460 --> 01:01:57.990
response to a particular feature.
01:01:57.990 --> 01:01:59.790
So one of them is just like green,
01:01:59.790 --> 01:02:01.180
another one is whether it's blue or
01:02:01.180 --> 01:02:03.640
orange, another one is like this, these
01:02:03.640 --> 01:02:04.430
bar features.
01:02:05.210 --> 01:02:06.849
And a lot of these actually look a lot
01:02:06.850 --> 01:02:10.280
like the filters that happened in early
01:02:10.280 --> 01:02:11.500
processing in the brain.
01:02:13.800 --> 01:02:14.380
01:02:15.130 --> 01:02:16.650
The brain doesn't do convolution
01:02:16.650 --> 01:02:18.890
exactly, but actually it's essentially
01:02:18.890 --> 01:02:21.220
the same; it does convolution by
01:02:21.220 --> 01:02:21.800
other means.
01:02:21.800 --> 01:02:24.070
Basically you can show like the
01:02:24.790 --> 01:02:27.270
sensitivity of neurons to
01:02:27.270 --> 01:02:27.840
stimuli.
01:02:29.060 --> 01:02:32.720
And then this is layer two, so you
01:02:32.720 --> 01:02:34.918
start to get like texture patterns like
01:02:34.918 --> 01:02:36.026
this one.
01:02:36.026 --> 01:02:38.230
One node is responsive to these
01:02:38.230 --> 01:02:38.740
stripes.
01:02:39.370 --> 01:02:41.330
Another one to like these thin lines.
01:02:41.330 --> 01:02:43.770
Another one is to yellow with some
01:02:43.770 --> 01:02:45.180
corner features.
01:02:46.340 --> 01:02:47.810
So they're like kind of like texture
01:02:47.810 --> 01:02:50.170
and low to mid level features and the
01:02:50.170 --> 01:02:50.700
next layer.
01:02:52.460 --> 01:02:53.940
And then in the next layer, it's like
01:02:53.940 --> 01:02:55.615
more complex patterns like this one.
01:02:55.615 --> 01:02:58.200
It's responsive to
01:02:58.200 --> 01:03:00.340
like grids in different orientations as
01:03:00.340 --> 01:03:01.940
well as like grids of these circles.
01:03:03.860 --> 01:03:07.050
This one is responsive to
01:03:07.050 --> 01:03:08.410
people's torso and head.
01:03:09.750 --> 01:03:11.979
Then there's like some that are like
01:03:11.980 --> 01:03:13.610
text, so they're starting to become
01:03:13.610 --> 01:03:15.638
more object-specific in what they're
01:03:15.638 --> 01:03:17.420
responsive, what these network nodes
01:03:17.420 --> 01:03:18.350
are responsive to.
01:03:19.220 --> 01:03:20.650
And you can see that for example.
01:03:21.380 --> 01:03:23.955
These ones that are firing on these
01:03:23.955 --> 01:03:26.256
grids, the active parts
01:03:26.256 --> 01:03:27.960
are the lines on the grids.
01:03:27.960 --> 01:03:30.310
And for these people,
01:03:30.310 --> 01:03:32.550
it's responding heavily to their faces.
01:03:35.000 --> 01:03:36.370
And then as you go deeper into the
01:03:36.370 --> 01:03:39.450
network, you start to get like object
01:03:39.450 --> 01:03:41.023
representations or object part
01:03:41.023 --> 01:03:41.499
representations.
01:03:41.500 --> 01:03:43.806
So there's a node for dog heads and
01:03:43.806 --> 01:03:48.205
there's the like curved part of circles
01:03:48.205 --> 01:03:51.370
and animal feet and birds that are
01:03:51.370 --> 01:03:51.910
swimming.
01:03:52.570 --> 01:03:56.769
And birds that are standing, and then
01:03:56.770 --> 01:03:58.470
this is layer four and then layer 5
01:03:58.470 --> 01:04:00.140
again is like more parts.
01:04:00.140 --> 01:04:02.090
So as you go through the network the
01:04:02.090 --> 01:04:04.373
representation start to represent more
01:04:04.373 --> 01:04:07.110
like objects, object level features
01:04:07.110 --> 01:04:08.280
where the earlier layers are
01:04:08.280 --> 01:04:09.970
representing like color and texture.
01:04:11.860 --> 01:04:14.560
There's lots of different ways of doing
01:04:14.560 --> 01:04:15.770
these visualizations that are
01:04:15.770 --> 01:04:17.270
interesting to look at, but that's just
01:04:17.270 --> 01:04:18.500
to give you like some sense.
01:04:52.410 --> 01:04:54.052
Yes, I think the question is like
01:04:54.052 --> 01:04:55.590
whether you should
01:04:55.590 --> 01:04:56.895
make use of color information because
01:04:56.895 --> 01:04:58.940
it's not that predictive I guess
01:04:58.940 --> 01:04:59.410
earlier.
01:04:59.410 --> 01:05:04.220
So back in like 2004, often
01:05:04.220 --> 01:05:05.690
people would train face detectors on
01:05:05.690 --> 01:05:07.200
grayscale images for that reason
01:05:07.200 --> 01:05:08.300
because they said color can be
01:05:08.300 --> 01:05:10.373
misleading and it's really depends on
01:05:10.373 --> 01:05:11.345
the amount of data.
01:05:11.345 --> 01:05:14.265
So the networks can learn like how much
01:05:14.265 --> 01:05:16.040
I trust color essentially.
01:05:16.040 --> 01:05:18.460
And so if you have like lots of data or
01:05:18.460 --> 01:05:20.280
you train a good encoder with Imagenet
01:05:20.280 --> 01:05:22.240
then there's no need to like.
01:05:22.300 --> 01:05:24.010
Grayscale your images.
01:05:24.010 --> 01:05:25.415
You just give the network essentially
01:05:25.415 --> 01:05:26.910
the choice of whether to use that
01:05:26.910 --> 01:05:27.700
information or not.
01:05:28.600 --> 01:05:29.930
So that would be like the current
01:05:29.930 --> 01:05:31.560
thinking on that, but it's a good
01:05:31.560 --> 01:05:32.110
question.
01:05:37.020 --> 01:05:37.430
All right.
01:05:37.430 --> 01:05:39.270
So I'm going to talk a little bit about
01:05:39.270 --> 01:05:41.590
object detection and I'll probably have
01:05:41.590 --> 01:05:44.200
to pick up some of this at the
01:05:44.200 --> 01:05:45.410
next class, but that's fine.
01:05:46.710 --> 01:05:48.880
So this is how object detection works
01:05:48.880 --> 01:05:49.420
in general.
01:05:49.420 --> 01:05:50.397
It's called like this.
01:05:50.397 --> 01:05:52.120
It's called a statistical template
01:05:52.120 --> 01:05:54.410
approach to object detection where you
01:05:54.410 --> 01:05:56.750
basically propose some windows where
01:05:56.750 --> 01:05:58.662
you think the object might be and this
01:05:58.662 --> 01:06:00.680
can either be like a very brute force
01:06:00.680 --> 01:06:03.589
dumb extract every patch, or you can
01:06:03.590 --> 01:06:06.480
use some segmentation methods to get
01:06:06.480 --> 01:06:08.630
like groups of pixels that have similar
01:06:08.630 --> 01:06:10.920
colors and then put boxes around those.
01:06:11.610 --> 01:06:13.170
But either way, you get a set of
01:06:13.170 --> 01:06:15.250
locations in the image that's like a
01:06:15.250 --> 01:06:18.374
bounding box, two corners in the image
01:06:18.374 --> 01:06:21.310
that you think that the object some
01:06:21.310 --> 01:06:22.995
object of interest might be inside of
01:06:22.995 --> 01:06:23.510
that box.
01:06:24.550 --> 01:06:26.290
You extract the features within that
01:06:26.290 --> 01:06:26.660
box.
01:06:26.660 --> 01:06:28.705
So you could use HOG features, which we
01:06:28.705 --> 01:06:31.235
talked about with SVMs (Dalal and Triggs), or it
01:06:31.235 --> 01:06:32.960
could be these Haar features.
01:06:32.960 --> 01:06:34.756
These are called Haar wavelet features
01:06:34.756 --> 01:06:36.760
that we use with boosting for face
01:06:36.760 --> 01:06:39.500
detection by Viola Jones or CNN
01:06:39.500 --> 01:06:40.930
features which we just talked about.
01:06:41.930 --> 01:06:44.110
Then you classify those features
01:06:44.110 --> 01:06:45.780
independently for each patch.
01:06:46.780 --> 01:06:48.490
And then you have some post process,
01:06:48.490 --> 01:06:50.390
because neighboring patches might have
01:06:50.390 --> 01:06:51.680
similar scores because they're
01:06:51.680 --> 01:06:53.520
overlapping and so you want to take the
01:06:53.520 --> 01:06:54.570
one with the highest score.
01:06:55.970 --> 01:06:57.805
And this is just generally how many
01:06:57.805 --> 01:06:59.510
many object detection methods work.
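NOTE
The post-process being described, taking the highest-scoring window among overlapping ones, is usually called non-maximum suppression; this is a rough sketch of that idea written from the description, a simple greedy version rather than any particular library's implementation.
    def nms(boxes, scores, iou_threshold=0.5):
        """boxes: list of (x1, y1, x2, y2); scores: list of floats. Returns indices of kept windows."""
        def iou(a, b):
            ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
            ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
            inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
            area_a = (a[2] - a[0]) * (a[3] - a[1])
            area_b = (b[2] - b[0]) * (b[3] - b[1])
            return inter / (area_a + area_b - inter + 1e-9)
        order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
        keep = []
        for i in order:
            # keep a window only if it doesn't overlap too much with a higher-scoring kept one
            if all(iou(boxes[i], boxes[j]) < iou_threshold for j in keep):
                keep.append(i)
        return keep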
01:07:00.940 --> 01:07:04.570
So the first like big foray into deep
01:07:04.570 --> 01:07:07.540
networks with this approach was R CNN
01:07:07.540 --> 01:07:08.470
by Girshick et al.
01:07:09.610 --> 01:07:11.870
And you take an input image.
01:07:11.870 --> 01:07:14.300
They use this method to extract boxes
01:07:14.300 --> 01:07:15.910
called selective search.
01:07:15.910 --> 01:07:17.480
Details aren't really that important,
01:07:17.480 --> 01:07:19.780
but you get around 2000 different
01:07:19.780 --> 01:07:21.400
windows that might contain objects.
01:07:22.390 --> 01:07:26.110
You warp them into a 224 by 224 patch or
01:07:26.110 --> 01:07:28.270
I guess 227 by 227.
01:07:28.270 --> 01:07:29.250
This is just because of.
01:07:29.250 --> 01:07:31.480
That's the size of like the Imagenet
01:07:31.480 --> 01:07:34.880
classifier would process, including
01:07:34.880 --> 01:07:35.410
some padding.
01:07:37.020 --> 01:07:39.150
Then you put this through your image
01:07:39.150 --> 01:07:41.628
net classifier and extract the
01:07:41.628 --> 01:07:45.600
features, and then you train a SVM to
01:07:45.600 --> 01:07:47.763
classify those features into each of
01:07:47.763 --> 01:07:48.890
the classes of interests.
01:07:48.890 --> 01:07:50.690
Like is it an airplane, is it a person,
01:07:50.690 --> 01:07:51.710
is it a TV monitor?
01:07:53.390 --> 01:07:56.550
This was like the first
01:07:56.550 --> 01:08:00.280
really amazing demonstration of fine
01:08:00.280 --> 01:08:00.620
tuning.
01:08:00.620 --> 01:08:02.560
So they would also fine tune this CNN
01:08:02.560 --> 01:08:04.605
classifier to do better, so they would
01:08:04.605 --> 01:08:06.250
fine tune it to do this task.
01:08:06.250 --> 01:08:08.160
So this was like very surprising at the
01:08:08.160 --> 01:08:09.730
time that you could take an image
01:08:09.730 --> 01:08:11.984
classification method and then you fine
01:08:11.984 --> 01:08:13.836
tune it, just set a lower learning rate
01:08:13.836 --> 01:08:15.740
and then adapt it to this new task
01:08:15.740 --> 01:08:16.800
where you didn't have that much
01:08:16.800 --> 01:08:17.420
training data.
01:08:17.420 --> 01:08:18.550
Relatively.
01:08:18.550 --> 01:08:20.120
If you train it from scratch it doesn't
01:08:20.120 --> 01:08:20.800
work that well.
01:08:20.800 --> 01:08:23.255
You have to start with the Imagenet pre
01:08:23.255 --> 01:08:23.750
trained
01:08:23.850 --> 01:08:25.890
classifier and then go from there, and
01:08:25.890 --> 01:08:27.630
then they got like amazing results.
01:08:30.180 --> 01:08:32.520
The next step was like there's a
01:08:32.520 --> 01:08:34.170
glaring inefficiency here, which is
01:08:34.170 --> 01:08:36.290
that you extract each patch and then
01:08:36.290 --> 01:08:38.172
pass that patch through the network.
01:08:38.172 --> 01:08:40.682
So you have 2000 patches,
01:08:40.682 --> 01:08:42.613
and each of those 2000 patches you have
01:08:42.613 --> 01:08:44.760
to put through your network in order to
01:08:44.760 --> 01:08:45.410
do detection.
01:08:45.410 --> 01:08:47.150
So super, super slow.
01:08:48.120 --> 01:08:50.010
So their next step is that you apply
01:08:50.010 --> 01:08:52.240
the network to the image and then you
01:08:52.240 --> 01:08:54.270
just extract patches within the feature
01:08:54.270 --> 01:08:58.776
maps at like a later layer in the
01:08:58.776 --> 01:08:59.449
network.
01:09:00.230 --> 01:09:02.430
And so that just made it really, really
01:09:02.430 --> 01:09:02.940
fast.
01:09:02.940 --> 01:09:04.420
You get similar performance.
01:09:05.420 --> 01:09:07.440
They also added something where you
01:09:07.440 --> 01:09:09.000
don't trust that initial window
01:09:09.000 --> 01:09:09.790
exactly.
01:09:09.790 --> 01:09:11.405
You predict something that like adjusts
01:09:11.405 --> 01:09:13.540
the corners of the box so you get
01:09:13.540 --> 01:09:15.320
better localization.
01:09:16.830 --> 01:09:18.870
That gave it 100 X speedup, where the
01:09:18.870 --> 01:09:21.395
first system ran at like 50.
01:09:21.395 --> 01:09:23.490
It could be as slow as
01:09:24.570 --> 01:09:27.990
50 seconds per frame, as fast as 20
01:09:27.990 --> 01:09:28.780
frames per second.
01:09:29.740 --> 01:09:31.910
But on average, 100X speedup.
01:09:34.120 --> 01:09:39.150
That was Fast R-CNN. Faster R-CNN is
01:09:39.150 --> 01:09:41.330
that instead of using that selective
01:09:41.330 --> 01:09:43.020
search method to propose Windows, you
01:09:43.020 --> 01:09:45.370
also learn to propose like the boxes
01:09:45.370 --> 01:09:46.760
inside the image where you think the
01:09:46.760 --> 01:09:47.450
object might be.
01:09:48.080 --> 01:09:49.300
And they use what's called like a
01:09:49.300 --> 01:09:50.800
region proposal network, a small
01:09:50.800 --> 01:09:51.840
network that does that.
01:09:51.840 --> 01:09:53.230
So it takes some intermediate
01:09:53.230 --> 01:09:56.290
representation from the encoder, and
01:09:56.290 --> 01:09:58.520
then it predicts for each position what
01:09:58.520 --> 01:10:00.310
are some boxes around that might
01:10:00.310 --> 01:10:02.510
contain the object of interest or an
01:10:02.510 --> 01:10:03.470
object of interest.
01:10:04.490 --> 01:10:07.320
And then they use that for
01:10:07.320 --> 01:10:08.110
classification.
01:10:08.740 --> 01:10:10.730
And then this gave similar accuracy to
01:10:10.730 --> 01:10:12.550
the previous method, but gave another
01:10:12.550 --> 01:10:13.370
10X speedup.
01:10:13.370 --> 01:10:14.850
So now it's like pretty fast.
01:10:16.780 --> 01:10:19.890
And then the final one is Mask R-CNN,
01:10:19.890 --> 01:10:21.860
which is still really widely used
01:10:21.860 --> 01:10:22.220
today.
01:10:24.160 --> 01:10:26.470
And it's essentially the same network
01:10:26.470 --> 01:10:28.690
as faster R CNN, but they added
01:10:28.690 --> 01:10:29.900
additional branches to it.
01:10:30.940 --> 01:10:34.860
So in faster R CNN, for every initial
01:10:34.860 --> 01:10:38.050
window you would predict a class score
01:10:38.050 --> 01:10:39.820
for each of your classes and it could
01:10:39.820 --> 01:10:41.190
be their background or one of those
01:10:41.190 --> 01:10:41.690
classes.
01:10:42.350 --> 01:10:44.510
And you would predict a refined box to
01:10:44.510 --> 01:10:46.090
better focus on the object.
01:10:46.700 --> 01:10:48.230
And they added to it additional
01:10:48.230 --> 01:10:51.750
branches: one predicts whether
01:10:51.750 --> 01:10:53.640
each pixel is on the object or not.
01:10:53.640 --> 01:10:55.543
So this is like the predicted mask for
01:10:55.543 --> 01:10:56.710
a car for example.
01:10:57.780 --> 01:11:02.030
And it predicts it in a 28 by 28
01:11:02.030 --> 01:11:03.620
window, so a small patch that then
01:11:03.620 --> 01:11:05.930
gets resized into the original window.
01:11:06.880 --> 01:11:09.310
And they also predict key points for
01:11:09.310 --> 01:11:09.985
people.
01:11:09.985 --> 01:11:13.990
And that's just like again a pixel map
01:11:13.990 --> 01:11:15.840
where you predict whether each pixel is
01:11:15.840 --> 01:11:17.778
like the left eye or whether each pixel
01:11:17.778 --> 01:11:19.569
is the right eye or the.
01:11:20.740 --> 01:11:22.530
Or the left hip or right hip and so on.
01:11:23.570 --> 01:11:25.310
And that gives you these key point
01:11:25.310 --> 01:11:25.910
predictions.
01:11:26.790 --> 01:11:29.050
So with the same network you're doing,
01:11:29.050 --> 01:11:31.660
detecting objects, segmenting out those
01:11:31.660 --> 01:11:34.040
objects or labeling their pixels and
01:11:34.040 --> 01:11:35.480
labeling the parts of people.
01:11:36.740 --> 01:11:39.130
And that same method was, at the time
01:11:39.130 --> 01:11:41.226
of release, the best object detector,
01:11:41.226 --> 01:11:43.464
the best instance segmentation method,
01:11:43.464 --> 01:11:46.060
and the best person keypoint detector.
01:11:47.100 --> 01:11:48.280
And it's still one of the most
01:11:48.280 --> 01:11:48.980
effective.
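NOTE
A small sketch of running a pretrained Mask R-CNN through torchvision's detection module; the model name is torchvision's, while the score threshold and the fake input image are illustrative assumptions (torchvision's version returns boxes, labels, scores, and masks, not the person keypoints).
    import torch
    from torchvision.models.detection import maskrcnn_resnet50_fpn
    model = maskrcnn_resnet50_fpn(pretrained=True).eval()  # pretrained on the COCO 80-class data set
    img = torch.rand(3, 480, 640)                           # one RGB image, values in [0, 1]
    with torch.no_grad():
        out = model([img])[0]           # dict with 'boxes', 'labels', 'scores', 'masks'
    keep = out["scores"] > 0.7          # keep only confident detections
    print(out["boxes"][keep].shape, out["masks"][keep].shape)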
01:11:49.710 --> 01:11:51.900
So these are some examples of how it
01:11:51.900 --> 01:11:52.450
performs.
01:11:53.380 --> 01:11:56.770
So up here it's detecting
01:11:56.770 --> 01:11:59.715
donuts and segmenting them, so all the
01:11:59.715 --> 01:12:01.030
different colors correspond to
01:12:01.030 --> 01:12:02.210
different doughnut regions.
01:12:03.400 --> 01:12:05.539
There's horses, and note how it
01:12:05.540 --> 01:12:06.220
knows that.
01:12:06.220 --> 01:12:08.180
It's not saying that the fence is a
01:12:08.180 --> 01:12:08.570
horse.
01:12:08.570 --> 01:12:10.310
It segments around the fence.
01:12:11.650 --> 01:12:15.650
There's people here and bags like
01:12:15.650 --> 01:12:18.540
handbag, traffic lights.
01:12:18.540 --> 01:12:20.565
This is in the Coco data set, which has
01:12:20.565 --> 01:12:21.340
80 classes.
01:12:22.330 --> 01:12:22.710
Chairs.
01:12:22.710 --> 01:12:24.080
These are not ground truth.
01:12:24.080 --> 01:12:25.210
These are the predictions of the
01:12:25.210 --> 01:12:25.780
network.
01:12:25.780 --> 01:12:28.130
So it's really accurate at segmenting
01:12:28.130 --> 01:12:29.180
and detecting objects.
01:12:29.810 --> 01:12:31.420
And then these numbers are the scores,
01:12:31.420 --> 01:12:33.010
which are probably hard to see from the
01:12:33.010 --> 01:12:34.160
audience, but they're.
01:12:35.290 --> 01:12:36.280
They tend to be pretty high.
01:12:37.540 --> 01:12:38.730
Here's just more examples.
01:12:39.500 --> 01:12:42.000
So this thing can detect lots of
01:12:42.000 --> 01:12:43.900
different classes and segment them and
01:12:43.900 --> 01:12:44.760
it can also.
01:12:45.400 --> 01:12:46.840
Predict the.
01:12:47.850 --> 01:12:48.970
The parts of the people.
01:12:50.850 --> 01:12:53.510
So the poses of the people that are
01:12:53.510 --> 01:12:54.985
then also segmented out.
01:12:54.985 --> 01:12:56.649
So this is used in many many
01:12:56.650 --> 01:12:58.600
applications, like by non-vision
01:12:58.600 --> 01:13:01.570
researchers, because it's very good at
01:13:01.570 --> 01:13:03.640
detecting people, segmenting people,
01:13:03.640 --> 01:13:06.529
finding their parts as well as you can
01:13:06.530 --> 01:13:09.005
adapt it; you can retrain it
01:13:09.005 --> 01:13:10.400
to do many other kinds of object
01:13:10.400 --> 01:13:10.930
detection.
01:13:19.640 --> 01:13:21.420
Sorry, I will finish this because I've
01:13:21.420 --> 01:13:22.290
got 2 minutes.
01:13:22.420 --> 01:13:22.900
01:13:23.700 --> 01:13:25.100
So the last thing I just want to
01:13:25.100 --> 01:13:27.540
briefly mention is:
01:13:27.540 --> 01:13:30.150
Resnet is like a real staple of deep
01:13:30.150 --> 01:13:32.183
learning and computer vision, one of
01:13:32.183 --> 01:13:34.080
the other kinds of architectures that's
01:13:34.080 --> 01:13:35.900
really widely used if you're trying to
01:13:35.900 --> 01:13:38.100
produce images or image maps, like
01:13:38.100 --> 01:13:39.955
label every pixel in the image into sky
01:13:39.955 --> 01:13:40.910
or tree or building.
01:13:41.600 --> 01:13:43.750
Is this thing called a U-Net
01:13:43.750 --> 01:13:44.510
architecture.
01:13:45.740 --> 01:13:47.240
So in the U-Net architecture,
01:13:48.960 --> 01:13:52.510
you process the image: you start with
01:13:52.510 --> 01:13:54.090
like a high resolution image, you
01:13:54.090 --> 01:13:57.360
process it, and just as for ResNet
01:13:57.360 --> 01:13:59.930
or other architectures, you downsample
01:13:59.930 --> 01:14:01.840
it while deepening the features.
01:14:01.840 --> 01:14:03.502
So you make the image smaller and
01:14:03.502 --> 01:14:05.124
smaller spatially while making the
01:14:05.124 --> 01:14:06.260
features deeper and deeper.
01:14:07.610 --> 01:14:09.130
And then eventually you get to a big
01:14:09.130 --> 01:14:11.150
long vector of features, just like for
01:14:11.150 --> 01:14:12.391
ResNet and other architectures.
01:14:12.391 --> 01:14:15.439
And then you upsample it back into an
01:14:15.439 --> 01:14:17.769
image map and when you upsample it, you
01:14:17.770 --> 01:14:19.350
have skip connections from this
01:14:19.350 --> 01:14:21.362
corresponding layer of detail.
01:14:21.362 --> 01:14:23.670
So as it's like bringing it back into
01:14:23.670 --> 01:14:27.210
an image size output, you're adding
01:14:27.210 --> 01:14:29.970
back the features from the detail that
01:14:29.970 --> 01:14:31.840
was obtained when you downsampled it.
01:14:31.840 --> 01:14:34.313
And this allows you to like upsample
01:14:34.313 --> 01:14:35.650
back into high detail.
01:14:35.650 --> 01:14:37.000
So this produces an image-sized output.
01:14:37.000 --> 01:14:37.770
This is used for
01:14:37.860 --> 01:14:39.400
pix2pix or
01:14:40.160 --> 01:14:42.400
image segmentation methods where you're
01:14:42.400 --> 01:14:43.900
trying to produce some output or some
01:14:43.900 --> 01:14:45.050
value for each pixel.
01:14:47.040 --> 01:14:48.240
It's just worth being aware of what
01:14:48.240 --> 01:14:49.210
this is, but
01:14:49.450 --> 01:14:53.530
I'm not going to go into much detail.
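NOTE
Editor's sketch (not from the lecture): a minimal U-Net-style encoder-decoder in PyTorch, just to make the downsample / deepen / upsample-with-skip-connections idea above concrete. The layer widths and the use of padded convolutions are simplifications of the original U-Net, chosen for brevity.

import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # two 3x3 convolutions with ReLU; padding keeps the spatial size unchanged
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=3, num_classes=2):
        super().__init__()
        self.enc1 = conv_block(in_ch, 32)      # high resolution, shallow features
        self.enc2 = conv_block(32, 64)         # half resolution, deeper features
        self.bottleneck = conv_block(64, 128)  # quarter resolution, deepest features
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)        # 128 = 64 upsampled + 64 from skip
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)         # 64 = 32 upsampled + 32 from skip
        self.head = nn.Conv2d(32, num_classes, 1)  # per-pixel class scores

    def forward(self, x):
        e1 = self.enc1(x)                  # kept for the skip connection
        e2 = self.enc2(self.pool(e1))      # downsample while deepening features
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # add detail back via skip
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)               # (batch, num_classes, H, W) label map

# e.g. a 3-channel 128x128 image -> per-pixel scores for 2 classes
print(TinyUNet()(torch.randn(1, 3, 128, 128)).shape)  # torch.Size([1, 2, 128, 128])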
01:14:54.270 --> 01:14:54.530
Right.
01:14:54.530 --> 01:14:58.100
So in summary, we learned that the
01:14:58.100 --> 01:15:00.530
massive ImageNet data set was a key
01:15:00.530 --> 01:15:01.770
ingredient in the deep learning
01:15:01.770 --> 01:15:02.280
breakthrough.
01:15:03.220 --> 01:15:06.230
We saw how ResNet uses skip
01:15:06.230 --> 01:15:07.200
connections.
01:15:07.200 --> 01:15:08.800
It uses data augmentation and batch
01:15:08.800 --> 01:15:09.440
normalization.
01:15:09.440 --> 01:15:10.900
These are also commonly used in many
01:15:10.900 --> 01:15:11.860
other architectures.
01:15:12.940 --> 01:15:15.000
Really important is that you can pre-
01:15:15.000 --> 01:15:17.180
train a model on a large data set, for
01:15:17.180 --> 01:15:19.220
example ImageNet, and then use that pre-
01:15:19.220 --> 01:15:20.420
trained model as what's called a
01:15:20.420 --> 01:15:22.220
backbone for other tasks where you
01:15:22.220 --> 01:15:24.935
either apply it as is or allow it to tune
01:15:24.935 --> 01:15:26.330
a little bit in the training.
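NOTE
Editor's sketch (not from the lecture): what "use a pretrained model as a backbone" typically looks like in code, using a torchvision ResNet. The 10-class head is a hypothetical new task, and the weights argument name differs across torchvision versions.

import torch
import torch.nn as nn
import torchvision

# ResNet-50 pretrained on ImageNet (older versions: resnet50(pretrained=True)).
backbone = torchvision.models.resnet50(weights="IMAGENET1K_V1")

# Option 1: freeze it and use it as a fixed feature extractor.
for p in backbone.parameters():
    p.requires_grad = False

# Swap the 1000-way ImageNet classifier for a head on the new (hypothetical) task;
# the new layer defaults to requires_grad=True, so only it gets trained.
backbone.fc = nn.Linear(backbone.fc.in_features, 10)

# Option 2: fine-tune instead of freezing, with a small learning rate so the
# pretrained weights only "tune a little bit" during training.
# optimizer = torch.optim.SGD(backbone.parameters(), lr=1e-3, momentum=0.9)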
01:15:27.570 --> 01:15:30.240
And then I showed you a little bit
01:15:30.240 --> 01:15:32.570
about Mask R-CNN, which samples
01:15:32.570 --> 01:15:33.980
patches from the feature maps and
01:15:33.980 --> 01:15:36.350
predicts boxes, the object region and
01:15:36.350 --> 01:15:36.870
key points.
01:15:37.850 --> 01:15:39.560
And then finally the main thing to
01:15:39.560 --> 01:15:41.350
know is just to be aware of the U-Net, which
01:15:41.350 --> 01:15:42.860
is a common architecture for
01:15:42.860 --> 01:15:45.030
segmentation or image generation.
01:15:46.750 --> 01:15:49.340
So thanks very much and on Tuesday I
01:15:49.340 --> 01:15:51.380
will talk about language and word
01:15:51.380 --> 01:15:53.570
representations.
01:15:53.570 --> 01:15:54.400
Have a good weekend.
01:15:58.190 --> 01:15:58.570
Welcome.