whisper-finetuning-for-asee / CS_441_2023_Spring_February_21,_2023.vtt
WEBVTT Kind: captions; Language: en-US
NOTE
Created on 2024-02-07T20:59:36.8732843Z by ClassTranscribe
00:01:39.900 --> 00:01:41.600
Alright, good morning everybody.
00:01:43.340 --> 00:01:45.450
So we're going to do another
00:01:45.450 --> 00:01:47.490
consolidation and review session.
00:01:47.870 --> 00:01:50.320
It's going to be sort of
00:01:50.320 --> 00:01:51.790
like just a different perspective on
00:01:51.790 --> 00:01:54.145
some of the things we've seen, and then
00:01:54.145 --> 00:01:56.260
I'll talk about the exam a
00:01:56.260 --> 00:01:57.130
little bit as well.
00:01:59.970 --> 00:02:04.120
So far we've been talking about this
00:02:04.120 --> 00:02:05.800
whole function quite a lot.
00:02:05.800 --> 00:02:09.309
That we have some data, we have some
00:02:09.310 --> 00:02:11.700
model F, we have some parameters Theta.
00:02:11.700 --> 00:02:13.580
We have something that we're trying to
00:02:13.580 --> 00:02:15.530
predict, Y, and we have some loss that
00:02:15.530 --> 00:02:17.680
defines how good our prediction is.
00:02:18.500 --> 00:02:20.720
And we're trying to solve for some
00:02:20.720 --> 00:02:23.370
parameters that minimize the loss.
00:02:24.770 --> 00:02:26.630
Given our model and our data and our
00:02:26.630 --> 00:02:29.500
parameters and our labels.
00:02:30.410 --> 00:02:33.700
And so it's all, it's pretty
00:02:33.700 --> 00:02:34.230
complicated.
00:02:34.230 --> 00:02:35.195
There's a lot there.
00:02:35.195 --> 00:02:37.700
And if I were going to reteach the
00:02:37.700 --> 00:02:39.610
class, I would probably start more
00:02:39.610 --> 00:02:42.460
simply by just talking about X.
00:02:42.460 --> 00:02:45.190
So let's just talk about X for now.
00:02:46.900 --> 00:02:48.970
So for example, when you have one bit
00:02:48.970 --> 00:02:50.900
and another bit and they like each
00:02:50.900 --> 00:02:52.860
other very much and they come together,
00:02:52.860 --> 00:02:53.710
it makes 3.
00:02:54.390 --> 00:02:55.030
I'm just kidding.
00:02:55.030 --> 00:02:56.250
That's how integers are made.
00:02:59.170 --> 00:03:03.190
So let's talk about the data for a bit.
00:03:03.190 --> 00:03:05.590
So first, like, what is data?
00:03:05.590 --> 00:03:08.140
This sounds like kind of elementary,
00:03:08.140 --> 00:03:09.890
but it's actually not a very easy
00:03:09.890 --> 00:03:11.000
question to answer, right?
00:03:11.860 --> 00:03:15.113
So one way that we can
00:03:15.113 --> 00:03:17.041
think about it is that data
00:03:17.041 --> 00:03:19.510
is information that helps us
00:03:19.510 --> 00:03:20.740
make decisions.
00:03:22.320 --> 00:03:23.930
Another way that we can think about it
00:03:23.930 --> 00:03:25.850
is that data is just numbers, right?
00:03:25.850 --> 00:03:27.457
Like, if you have
00:03:27.457 --> 00:03:30.390
data stored on a computer,
00:03:30.390 --> 00:03:33.050
it's just like a big sequence of bits.
00:03:33.050 --> 00:03:35.976
And that's really all data
00:03:35.976 --> 00:03:36.159
is.
00:03:36.159 --> 00:03:37.590
It's just a bunch of numbers.
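For instance, here's a minimal Python sketch of that idea: the same piece of text, viewed as the bytes and then the bits it is actually stored as.

```python
# The text, its bytes, and the bits underneath are all the same data.
text = "data"
raw = text.encode("utf-8")               # four byte values
bits = "".join(f"{b:08b}" for b in raw)  # the raw bit sequence

print(list(raw))   # [100, 97, 116, 97]
print(bits)        # 01100100011000010111010001100001
```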
00:03:40.250 --> 00:03:43.495
So for people, if we think about how
00:03:43.495 --> 00:03:46.030
we represent data, we store it in terms
00:03:46.030 --> 00:03:49.200
of media that we can see, read or hear.
00:03:49.200 --> 00:03:51.190
So we might have images.
00:03:51.820 --> 00:03:54.513
We might have like text documents, we
00:03:54.513 --> 00:03:57.240
might have audio files, we could have
00:03:57.240 --> 00:03:58.450
plots and tables.
00:03:58.450 --> 00:04:00.090
So there are things that we perceive
00:04:00.090 --> 00:04:01.920
and then we make sense of it based on
00:04:01.920 --> 00:04:02.810
our perception.
00:04:04.900 --> 00:04:05.989
And the data can
00:04:05.989 --> 00:04:07.980
take different forms
00:04:07.980 --> 00:04:09.450
without really changing its meaning.
00:04:09.450 --> 00:04:11.900
So we can resize an image, we can
00:04:11.900 --> 00:04:16.045
rephrase a paragraph, we can speed up
00:04:16.045 --> 00:04:18.770
an audio book, and all of that changes
00:04:18.770 --> 00:04:20.860
the form of the data a bit, but it
00:04:20.860 --> 00:04:22.809
doesn't really change much of the
00:04:22.810 --> 00:04:26.470
information that the data contains.
00:04:29.200 --> 00:04:31.890
And sometimes we can change the data so
00:04:31.890 --> 00:04:33.750
that it becomes more informative to us.
00:04:33.750 --> 00:04:36.940
So we can denoise an image, we can
00:04:36.940 --> 00:04:37.590
clean it up.
00:04:37.590 --> 00:04:39.825
We can try to identify the key points
00:04:39.825 --> 00:04:42.460
and insights in a document.
00:04:42.460 --> 00:04:43.186
Like Cliff Notes.
00:04:43.186 --> 00:04:45.280
We can remove background noise from
00:04:45.280 --> 00:04:45.900
audio.
00:04:47.030 --> 00:04:50.945
And none of these operations really add
00:04:50.945 --> 00:04:52.040
information to the data.
00:04:52.040 --> 00:04:53.530
If anything, they take away
00:04:53.530 --> 00:04:55.230
information, they prune it.
00:04:56.170 --> 00:04:58.890
But they reorganize it, and they
00:04:58.890 --> 00:05:01.390
remove distracting information so that
00:05:01.390 --> 00:05:03.276
it's easier for us to extract
00:05:03.276 --> 00:05:05.550
information that we want from that
00:05:05.550 --> 00:05:05.970
data.
00:05:08.040 --> 00:05:09.570
So that's from our
00:05:09.570 --> 00:05:10.790
perspective as people.
00:05:11.930 --> 00:05:15.510
For computers, data are just numbers,
00:05:15.510 --> 00:05:17.060
so the numbers don't really mean
00:05:17.060 --> 00:05:18.730
anything by themselves.
00:05:18.730 --> 00:05:20.090
They're just bits, right?
00:05:20.780 --> 00:05:22.505
The meaning comes from the way the
00:05:22.505 --> 00:05:24.169
numbers were produced and how they can
00:05:24.170 --> 00:05:25.620
inform what they can tell us about
00:05:25.620 --> 00:05:26.930
other numbers, essentially.
00:05:28.090 --> 00:05:28.820
So each number could be
00:05:29.490 --> 00:05:32.400
informative on its own,
00:05:32.400 --> 00:05:34.490
or it could only be
00:05:34.490 --> 00:05:37.160
informative if
00:05:37.160 --> 00:05:39.175
you view it in patterns with other groups
00:05:39.175 --> 00:05:39.860
of numbers.
00:05:41.530 --> 00:05:43.410
So, one bit:
00:05:43.410 --> 00:05:44.975
if you have a whole bit string, the
00:05:44.975 --> 00:05:46.320
bits individually may not mean
00:05:46.320 --> 00:05:48.624
anything, but those bits may form
00:05:48.624 --> 00:05:50.208
characters, and those characters may
00:05:50.208 --> 00:05:52.410
form words, and those words may tell us
00:05:52.410 --> 00:05:53.420
something useful.
00:05:55.980 --> 00:05:59.479
So just like we can
00:05:59.480 --> 00:06:02.160
resize images and speed up audio and
00:06:02.160 --> 00:06:04.494
things like that to change the form of
00:06:04.494 --> 00:06:06.114
the data without changing the
00:06:06.114 --> 00:06:08.390
information in the data, we can also
00:06:08.390 --> 00:06:11.090
transform data without changing its
00:06:11.090 --> 00:06:13.250
information in computer programs.
00:06:13.830 --> 00:06:16.290
So, for example, we can add or multiply
00:06:16.290 --> 00:06:19.607
a vector by a constant value, and as
00:06:19.607 --> 00:06:21.220
long as we do that consistently, it
00:06:21.220 --> 00:06:22.740
doesn't really change the information
00:06:22.740 --> 00:06:24.160
that's contained in that data.
00:06:24.160 --> 00:06:27.230
So there's nothing inherently different
00:06:27.230 --> 00:06:29.136
about, for example, if I represent a
00:06:29.136 --> 00:06:31.000
vector or I represent the negative
00:06:31.000 --> 00:06:33.140
vector, as long as I'm consistent.
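As a small sketch of this with NumPy (the numbers here are made up): scaling or negating vectors is invertible and preserves the distances between them.

```python
import numpy as np

a = np.array([1.0, 2.0])
b = np.array([3.0, 4.0])

# Scaling by a constant is invertible, so no information is lost.
scaled = 10.0 * a
assert np.allclose(scaled / 10.0, a)

# Using the negative vectors consistently leaves distances unchanged.
d_original = np.linalg.norm(a - b)
d_negated = np.linalg.norm((-a) - (-b))
assert np.isclose(d_original, d_negated)
```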
00:06:34.810 --> 00:06:36.726
We can represent the data in different
00:06:36.726 --> 00:06:39.130
ways, for example as 16- or 32-bit
00:06:39.130 --> 00:06:40.360
floats or integers.
00:06:40.360 --> 00:06:41.750
We might lose a little bit, but not
00:06:41.750 --> 00:06:42.503
very much.
00:06:42.503 --> 00:06:45.000
We can compress the document or store
00:06:45.000 --> 00:06:47.070
it in a different file format, so
00:06:47.070 --> 00:06:48.400
there's lots of different ways to
00:06:48.400 --> 00:06:50.630
represent the same data without
00:06:50.630 --> 00:06:52.060
changing the information
00:06:52.680 --> 00:06:54.590
that is stored in that data, or that's
00:06:54.590 --> 00:06:55.820
represented by that data.
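A quick illustration of the precision point (with made-up values): casting 32-bit floats down to 16 bits loses a little, but not very much.

```python
import numpy as np

x = np.array([3.14159, 0.12345, 100.5], dtype=np.float32)

# Store the same values in half the bits.
x16 = x.astype(np.float16)

# The round-trip error is tiny relative to the values themselves.
err = np.abs(x16.astype(np.float32) - x).max()
print(err)  # a small rounding error, well under 0.1
```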
00:06:57.980 --> 00:07:00.860
And just as sometimes we can
00:07:00.860 --> 00:07:02.984
create summaries or ways to make data
00:07:02.984 --> 00:07:04.490
more informative for people.
00:07:04.490 --> 00:07:06.597
We can also sometimes transform the
00:07:06.597 --> 00:07:08.639
data to make it more informative for
00:07:08.640 --> 00:07:09.230
computers.
00:07:10.070 --> 00:07:12.400
So we can center and rescale the images
00:07:12.400 --> 00:07:13.890
of digits so that they're easier to
00:07:13.890 --> 00:07:15.850
compare to each other.
00:07:15.850 --> 00:07:18.240
For example, we can normalize the data,
00:07:18.240 --> 00:07:19.910
for example, subtract the means and
00:07:19.910 --> 00:07:21.944
divide by the standard deviations of the
00:07:21.944 --> 00:07:24.023
features of like cancer cell
00:07:24.023 --> 00:07:26.025
measurements, so that similarity
00:07:26.025 --> 00:07:28.740
measurements better reflect malignancy.
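That normalization step can be sketched with NumPy on a made-up feature matrix (rows are samples, columns are features):

```python
import numpy as np

# Hypothetical measurements on very different scales
# (one feature in the hundreds, one near zero).
X = np.array([[170.0, 0.02],
              [180.0, 0.05],
              [160.0, 0.03]])

# Subtract the mean and divide by the standard deviation of each
# feature, so no single feature dominates similarity measurements.
X_norm = (X - X.mean(axis=0)) / X.std(axis=0)

print(X_norm.mean(axis=0))  # approximately [0, 0]
print(X_norm.std(axis=0))   # approximately [1, 1]
```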
00:07:28.740 --> 00:07:31.430
And we can do feature selection or
00:07:31.430 --> 00:07:33.780
create new features out of combinations
00:07:33.780 --> 00:07:34.590
of inputs.
00:07:34.590 --> 00:07:38.330
So this is kind of like analogous to
00:07:38.330 --> 00:07:40.230
creating a summary of a document.
00:07:40.280 --> 00:07:42.370
Or denoising the image so that we can
00:07:42.370 --> 00:07:44.580
see it better, or enhancing or things
00:07:44.580 --> 00:07:45.290
like that, right?
00:07:46.210 --> 00:07:48.880
Makes it easier to extract information
00:07:48.880 --> 00:07:50.240
from the same data.
00:07:53.320 --> 00:07:55.000
And sometimes they also change the
00:07:55.000 --> 00:07:57.370
structure of the data to make it easier
00:07:57.370 --> 00:07:58.250
to process.
00:07:58.960 --> 00:08:01.610
So we might naturally think of the
00:08:01.610 --> 00:08:05.505
image as a matrix, where each
00:08:05.505 --> 00:08:09.260
of these grid cells represents some
00:08:09.260 --> 00:08:12.356
intensity at some position in the
00:08:12.356 --> 00:08:12.619
image.
00:08:13.430 --> 00:08:16.350
And this feels natural because the
00:08:16.350 --> 00:08:18.640
image takes up some area.
00:08:18.640 --> 00:08:20.219
It's like it makes sense to think of it
00:08:20.220 --> 00:08:22.530
in terms of rows and columns, but we
00:08:22.530 --> 00:08:24.780
can equivalently represent it as a
00:08:24.780 --> 00:08:27.340
vector, which is what we did for the
00:08:27.340 --> 00:08:28.580
homework and what we often do.
00:08:29.300 --> 00:08:32.610
And you just reshape it, and this is
00:08:32.610 --> 00:08:33.510
more convenient.
00:08:33.510 --> 00:08:35.432
So the matrix form is more convenient
00:08:35.432 --> 00:08:37.900
for local pattern analysis if we're
00:08:37.900 --> 00:08:39.550
trying to look for edges and things
00:08:39.550 --> 00:08:40.370
like that.
00:08:40.370 --> 00:08:42.416
The vector form is more convenient if
00:08:42.416 --> 00:08:44.360
we're trying to apply a linear model to
00:08:44.360 --> 00:08:46.552
it, because we can just do that as
00:08:46.552 --> 00:08:48.090
a dot product operation.
00:08:50.040 --> 00:08:52.010
So either way, it doesn't change the
00:08:52.010 --> 00:08:53.510
information in the data.
00:08:53.510 --> 00:08:56.310
But this form like makes no sense to us
00:08:56.310 --> 00:08:57.110
as people.
00:08:57.110 --> 00:08:59.790
But for computers it's more convenient
00:08:59.790 --> 00:09:01.360
to do certain kinds of operations if
00:09:01.360 --> 00:09:03.020
you represent it as a vector versus a
00:09:03.020 --> 00:09:03.540
matrix.
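Both forms can be sketched with NumPy (using a random made-up image): the matrix and the vector hold exactly the same numbers, and a linear model on the vector form is just a dot product.

```python
import numpy as np

rng = np.random.default_rng(0)

img = rng.random((4, 4))   # a tiny grayscale "image" as a 2D matrix
vec = img.reshape(-1)      # the same 16 numbers as a 1D vector

# A linear model on the vector form is a single dot product.
w = rng.random(16)         # made-up model weights
score = w @ vec
print(score)

# Reshaping is invertible: we can get the matrix back exactly.
assert np.array_equal(vec.reshape(4, 4), img)
```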
00:09:06.210 --> 00:09:08.270
So let's talk about how some different
00:09:08.270 --> 00:09:10.120
forms of information are represented.
00:09:10.120 --> 00:09:13.390
So as I mentioned a little bit in the
00:09:13.390 --> 00:09:16.580
last class, we can represent images as
00:09:16.580 --> 00:09:17.920
3D matrices.
00:09:18.660 --> 00:09:20.930
Where the three dimensions are the row,
00:09:20.930 --> 00:09:22.190
the column and the color.
00:09:22.820 --> 00:09:24.565
So if we have some intensity pattern
00:09:24.565 --> 00:09:29.290
like this, then the bright values are
00:09:29.290 --> 00:09:31.640
typically one or 255 depending on your
00:09:31.640 --> 00:09:32.410
representation.
00:09:33.090 --> 00:09:35.580
The dark values will be very low, like
00:09:35.580 --> 00:09:37.990
0 or in this case the darkest values
00:09:37.990 --> 00:09:39.210
are only about 0.3.
00:09:40.190 --> 00:09:43.140
And you represent that for the entire
00:09:43.140 --> 00:09:45.140
image area, and that gives you.
00:09:45.140 --> 00:09:47.880
If you're representing a grayscale
00:09:47.880 --> 00:09:49.410
image, you would just have one color
00:09:49.410 --> 00:09:51.130
dimension, so you'd have a number of
00:09:51.130 --> 00:09:52.808
rows by number of columns by one.
00:09:52.808 --> 00:09:55.425
If you have an RGB image, then you
00:09:55.425 --> 00:09:57.933
would have one matrix for each of the
00:09:57.933 --> 00:09:59.040
color dimensions.
00:09:59.040 --> 00:10:01.912
So you'd have a 2D matrix for R, a 2D
00:10:01.912 --> 00:10:04.260
matrix for G, and a 2D matrix for B.
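For example, a hypothetical 2x3 RGB image as a rows-by-columns-by-colors array:

```python
import numpy as np

# A 2x3 RGB image: 2 rows, 3 columns, 3 color channels.
img = np.zeros((2, 3, 3), dtype=np.uint8)
img[0, 0] = [255, 0, 0]    # one pure red pixel at row 0, column 0

# One 2D matrix per color channel.
R, G, B = img[..., 0], img[..., 1], img[..., 2]
print(R.shape, G.shape, B.shape)   # (2, 3) (2, 3) (2, 3)
print(R[0, 0], G[0, 0], B[0, 0])   # 255 0 0
```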
00:10:08.340 --> 00:10:12.180
Text can be represented as a sequence
00:10:12.180 --> 00:10:13.090
of integers.
00:10:14.020 --> 00:10:15.890
And actually, we'll
00:10:15.890 --> 00:10:18.010
learn a lot more about word
00:10:18.010 --> 00:10:21.210
representations next week and how to
00:10:21.210 --> 00:10:24.025
process language, but it's actually a
00:10:24.025 --> 00:10:25.976
more subtle problem than you might
00:10:25.976 --> 00:10:27.010
think at first.
00:10:27.010 --> 00:10:29.745
So you might think, well, represent each
00:10:29.745 --> 00:10:31.240
word as an integer.
00:10:31.240 --> 00:10:34.420
But then that becomes kind of tricky
00:10:34.420 --> 00:10:35.880
because you can have lots of similar
00:10:35.880 --> 00:10:39.075
words, like swim and swims and swimming, and
00:10:39.075 --> 00:10:40.752
those will all be different integers.
00:10:40.752 --> 00:10:42.930
And those integers are kind of like
00:10:42.930 --> 00:10:43.990
arbitrary tokens that
00:10:44.100 --> 00:10:45.940
don't necessarily have any similarity
00:10:45.940 --> 00:10:46.800
to each other.
00:10:48.530 --> 00:10:50.170
And then if you try to represent things
00:10:50.170 --> 00:10:51.680
as integers, then you run into
00:10:51.680 --> 00:10:53.525
names and lots of different varieties
00:10:53.525 --> 00:10:55.020
of ways that we put characters
00:10:55.020 --> 00:10:56.250
together, then you have difficulty
00:10:56.250 --> 00:10:57.140
representing all of those.
00:10:57.140 --> 00:10:58.459
You need an awful lot of integers.
00:10:59.860 --> 00:11:01.665
So you can go to another extreme and
00:11:01.665 --> 00:11:03.016
just represent the
00:11:03.016 --> 00:11:05.042
characters as byte
00:11:05.042 --> 00:11:05.984
values.
00:11:05.984 --> 00:11:09.430
So you could represent "dog eats" as a
00:11:09.430 --> 00:11:13.490
sequence of character codes, one per letter, with one code for the space.
00:11:13.490 --> 00:11:16.164
So you could just represent the
00:11:16.164 --> 00:11:18.920
characters as a byte stream and process
00:11:18.920 --> 00:11:19.550
it that way.
00:11:19.550 --> 00:11:21.335
That's one extreme.
00:11:21.335 --> 00:11:23.579
The other extreme is that you represent
00:11:23.580 --> 00:11:26.170
each complete word as an integer value
00:11:26.170 --> 00:11:28.630
and so you pre-assign... you have some
00:11:28.690 --> 00:11:30.480
vocabulary where you have like all the
00:11:30.480 --> 00:11:31.490
words that you think you might
00:11:31.490 --> 00:11:32.146
encounter.
00:11:32.146 --> 00:11:34.974
You assign each word to some integer,
00:11:34.974 --> 00:11:36.820
and then you have an integer sequence
00:11:36.820 --> 00:11:38.270
that you're going to process.
00:11:38.270 --> 00:11:41.660
And if you see some new set of
00:11:41.660 --> 00:11:43.953
characters that is not in your vocabulary,
00:11:43.953 --> 00:11:46.490
you assign it to an unknown token, a
00:11:46.490 --> 00:11:49.930
token called Unknown or UNK typically.
00:11:51.080 --> 00:11:54.410
And then there's also like intermediate
00:11:54.410 --> 00:11:55.920
things, which I'll talk about more when
00:11:55.920 --> 00:11:57.419
I talk about language, where you can
00:11:57.420 --> 00:12:00.560
group common groups of letters into
00:12:00.560 --> 00:12:02.589
their own little groups and represent
00:12:02.590 --> 00:12:03.530
each of those.
00:12:03.530 --> 00:12:05.270
So you can represent, for example,
00:12:05.270 --> 00:12:10.020
"bedroom1521" as one integer for bed,
00:12:10.020 --> 00:12:12.279
one integer for
00:12:12.280 --> 00:12:15.446
room, and then four more integers for
00:12:15.446 --> 00:12:16.013
1521.
00:12:16.013 --> 00:12:18.090
And with this kind of representation
00:12:18.090 --> 00:12:19.960
you can model any kind of like
00:12:19.960 --> 00:12:20.370
sequence of characters.
00:12:20.420 --> 00:12:22.610
Just really weird
00:12:22.610 --> 00:12:24.590
sequences like random letters will take
00:12:24.590 --> 00:12:26.590
a lot of different integers to
00:12:26.590 --> 00:12:29.870
represent, while
00:12:29.870 --> 00:12:32.310
common words will only take one integer
00:12:32.310 --> 00:12:32.650
each.
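The two extremes can be sketched like this (the vocabulary and sentences are made up for illustration):

```python
# Extreme 1: represent each character as its byte value.
char_tokens = list("dog eats".encode("utf-8"))

# Extreme 2: one integer per whole word, with an UNK token for
# anything that is not in the pre-assigned vocabulary.
vocab = {"<UNK>": 0, "the": 1, "dog": 2, "eats": 3}

def word_tokens(text):
    return [vocab.get(word, vocab["<UNK>"]) for word in text.split()]

print(char_tokens)                    # one integer per character
print(word_tokens("the dog eats"))    # [1, 2, 3]
print(word_tokens("the zebra eats"))  # [1, 0, 3]: 'zebra' becomes UNK
```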
00:12:37.160 --> 00:12:39.623
And then we also may want to represent
00:12:39.623 --> 00:12:40.039
audio.
00:12:40.040 --> 00:12:43.270
So audio we can represent in different
00:12:43.270 --> 00:12:45.839
ways, we can represent it as amplitude
00:12:45.839 --> 00:12:46.606
versus time.
00:12:46.606 --> 00:12:48.870
The waveform, and this is usually the
00:12:48.870 --> 00:12:50.590
way that it's stored: you just have
00:12:50.590 --> 00:12:54.930
an amplitude sampled at some high frequency. Or
00:12:54.930 --> 00:12:58.530
you can represent it as a spectrogram,
00:12:58.530 --> 00:13:00.660
amplitude versus frequency versus
00:13:00.660 --> 00:13:02.970
time, like what's the power in
00:13:03.060 --> 00:13:04.900
the low notes versus the high notes
00:13:04.900 --> 00:13:06.530
at each time step.
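A rough sketch of both representations with NumPy, using a made-up 440 Hz tone (a real spectrogram would also use windowing and overlapping frames):

```python
import numpy as np

sr = 8000                            # samples per second
t = np.arange(sr) / sr
wave = np.sin(2 * np.pi * 440 * t)   # amplitude versus time: the waveform

# Chop into frames and FFT each one: amplitude per frequency per time step.
frame = 256
frames = wave[: len(wave) // frame * frame].reshape(-1, frame)
spec = np.abs(np.fft.rfft(frames, axis=1))  # (time frames, frequency bins)

freqs = np.fft.rfftfreq(frame, d=1 / sr)
print(spec.shape)                           # (31, 129)
print(freqs[spec.mean(axis=0).argmax()])    # near 440 Hz
```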
00:13:10.280 --> 00:13:11.720
And then there's lots of other kinds of
00:13:11.720 --> 00:13:12.030
data.
00:13:12.030 --> 00:13:14.610
So we can represent measurements and
00:13:14.610 --> 00:13:16.420
continuous values as floating point
00:13:16.420 --> 00:13:18.760
numbers: temperature, length, area,
00:13:18.760 --> 00:13:21.970
dollars. Categorical values, like color,
00:13:21.970 --> 00:13:24.930
like whether something's happy or sad
00:13:24.930 --> 00:13:27.430
or big or small, those can be
00:13:27.430 --> 00:13:29.537
represented as integers.
00:13:29.537 --> 00:13:32.450
And here the distinction is that when
00:13:32.450 --> 00:13:34.052
you're representing categorical values
00:13:34.052 --> 00:13:36.210
as integers,
00:13:36.840 --> 00:13:38.620
the distance between the integers doesn't
00:13:38.620 --> 00:13:40.920
imply similarity, usually, so you don't
00:13:40.920 --> 00:13:42.790
necessarily say that zero is more
00:13:42.790 --> 00:13:45.150
similar to one than it is to two when
00:13:45.150 --> 00:13:46.910
you're representing categorical values.
00:13:47.960 --> 00:13:49.530
But if you're representing continuous
00:13:49.530 --> 00:13:51.110
values, then the
00:13:51.110 --> 00:13:52.820
Euclidean distance between those values
00:13:52.820 --> 00:13:53.710
is meaningful.
00:13:55.920 --> 00:13:57.820
And all of these different types of
00:13:57.820 --> 00:14:00.120
values, the text, the images and the
00:14:00.120 --> 00:14:02.560
measurements can be reshaped and
00:14:02.560 --> 00:14:04.670
concatenated into a long feature
00:14:04.670 --> 00:14:05.050
vector.
00:14:05.050 --> 00:14:06.610
And that's often what we do.
00:14:06.610 --> 00:14:09.240
We take everything, every kind of
00:14:09.240 --> 00:14:11.300
information that we think can be
00:14:11.300 --> 00:14:13.900
applicable to solve some problem or
00:14:13.900 --> 00:14:15.500
predict some Y that we're interested
00:14:15.500 --> 00:14:15.730
in.
00:14:16.440 --> 00:14:20.190
At some point we take that information,
00:14:20.190 --> 00:14:22.640
we reshape it into a big vector, and
00:14:22.640 --> 00:14:24.970
then we do a prediction based on that
00:14:24.970 --> 00:14:25.410
vector.
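That concatenation step can be sketched with made-up pieces of information about one sample:

```python
import numpy as np

# Hypothetical inputs for one sample: a tiny 2x2 image patch,
# two continuous measurements, and a one-hot category.
img_patch = np.array([[0.1, 0.9],
                      [0.4, 0.2]])
measurements = np.array([36.6, 72.0])
category = np.array([0.0, 1.0, 0.0])

# Reshape and concatenate everything into one long feature vector.
x = np.concatenate([img_patch.reshape(-1), measurements, category])
print(x.shape)  # (9,)
print(x)
```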
So this is the same information.
00:14:39.780 --> 00:14:41.440
Content can be represented in many
00:14:41.440 --> 00:14:41.910
ways.
00:14:43.150 --> 00:14:45.930
Essentially, if the original numbers
00:14:45.930 --> 00:14:47.502
can be recovered, then it means that
00:14:47.502 --> 00:14:49.475
the change in representation doesn't
00:14:49.475 --> 00:14:50.980
change the information content.
00:14:50.980 --> 00:14:52.729
So any kind of transformation that we
00:14:52.730 --> 00:14:54.419
apply that we can invert, that we can
00:14:54.420 --> 00:14:56.350
get back to the original is not
00:14:56.350 --> 00:14:57.720
changing the information, it's just
00:14:57.720 --> 00:14:59.510
reshaping the data in some way that
00:14:59.510 --> 00:15:01.550
might make it easier or maybe harder to
00:15:01.550 --> 00:15:02.210
process.
00:15:03.570 --> 00:15:05.795
And we can store all types of data as
00:15:05.795 --> 00:15:07.100
1D vectors and arrays.
00:15:07.850 --> 00:15:10.800
And so, as our
00:15:10.800 --> 00:15:15.480
data set, we'll typically have some set of vectors:
00:15:15.480 --> 00:15:17.630
a matrix where the columns are
00:15:17.630 --> 00:15:20.320
individual data samples and the rows
00:15:20.320 --> 00:15:22.570
correspond to different features,
00:15:22.570 --> 00:15:24.340
representing a set of data.
00:15:25.500 --> 00:15:27.820
And you don't really...
00:15:27.820 --> 00:15:30.060
you never really need to use matrices
00:15:30.060 --> 00:15:31.680
or other data structures, but they just
00:15:31.680 --> 00:15:33.690
make it easier for us to code, and so,
00:15:33.690 --> 00:15:34.170
again,
00:15:34.170 --> 00:15:36.080
like there's nothing inherent
00:15:36.080 --> 00:15:37.740
about those structures that adds
00:15:37.740 --> 00:15:39.800
information to the data, it's just for
00:15:39.800 --> 00:15:40.570
convenience.
00:15:42.980 --> 00:15:45.000
So all of that so far is kind of
00:15:45.000 --> 00:15:49.019
describing a data point: one piece of data that
00:15:49.020 --> 00:15:51.460
we might use to make a prediction, or to
00:15:51.460 --> 00:15:52.980
gather some information from.
00:15:53.750 --> 00:15:55.660
But in machine learning,
00:15:55.660 --> 00:15:56.035
we're
00:15:56.035 --> 00:15:58.385
often dealing with data sets, so
00:15:58.385 --> 00:16:01.340
we want to learn from some set of data
00:16:01.340 --> 00:16:03.676
so that when we get some new data
00:16:03.676 --> 00:16:05.540
point, we can make some useful
00:16:05.540 --> 00:16:07.060
prediction from that data point.
00:16:08.850 --> 00:16:12.144
So we can write this as: we have
00:16:12.144 --> 00:16:14.436
some X, which is a set of data.
00:16:14.436 --> 00:16:17.607
The little x here... or actually, sorry, the little x
00:16:17.607 --> 00:16:18.670
is not a set of data.
00:16:18.670 --> 00:16:21.190
The little X is a data point with M
00:16:21.190 --> 00:16:24.120
features, so it has some M scalar
00:16:24.120 --> 00:16:27.304
values and it's drawn from some
00:16:27.304 --> 00:16:29.720
distribution D. So for example, your
00:16:29.720 --> 00:16:32.114
distribution D could be all the images
00:16:32.114 --> 00:16:34.650
that are on the Internet and you're
00:16:34.650 --> 00:16:36.207
just like downloading random images
00:16:36.207 --> 00:16:37.070
from the Internet.
00:16:37.120 --> 00:16:38.820
And then one of those random images is
00:16:38.820 --> 00:16:39.820
a little X.
00:16:41.330 --> 00:16:43.650
We can sample many of these X's so we
00:16:43.650 --> 00:16:45.040
could download different documents from
00:16:45.040 --> 00:16:45.442
the Internet.
00:16:45.442 --> 00:16:47.170
We could download like emails to
00:16:47.170 --> 00:16:49.000
classify spam or not spam.
00:16:49.000 --> 00:16:51.769
We could take pictures, we could take
00:16:51.770 --> 00:16:54.830
measurements, and then we get a
00:16:54.830 --> 00:16:57.180
collection of those data points and
00:16:57.180 --> 00:16:59.830
that gives us some big X.
00:16:59.830 --> 00:17:03.610
It's a set of these little x vectors,
00:17:03.610 --> 00:17:06.890
from 1 to N... or I guess it
00:17:06.890 --> 00:17:08.290
should be 0 to N minus one.
00:17:09.170 --> 00:17:11.830
And that's drawn...
00:17:11.830 --> 00:17:13.790
it's all drawn from some distribution D,
00:17:13.790 --> 00:17:15.260
so there's always some implicit
00:17:15.260 --> 00:17:16.865
distribution even if we don't know what
00:17:16.865 --> 00:17:19.190
it is, some source of the data that
00:17:19.190 --> 00:17:19.950
we're sampling.
00:17:19.950 --> 00:17:21.936
And typically we assume that we don't
00:17:21.936 --> 00:17:23.332
have all the data, we just have like
00:17:23.332 --> 00:17:25.020
some of it, we have some representative
00:17:25.020 --> 00:17:26.200
sample of that data.
00:17:27.380 --> 00:17:28.940
So we can repeat the collection many
00:17:28.940 --> 00:17:30.980
times, or we can collect one big data
00:17:30.980 --> 00:17:33.670
set and split it, and then we'll often
00:17:33.670 --> 00:17:36.173
split it into some X train, which are
00:17:36.173 --> 00:17:37.950
the samples that we're going to learn
00:17:37.950 --> 00:17:40.935
from, and an X test, which are the samples
00:17:40.935 --> 00:17:42.820
that we're going to use to see how well we
00:17:42.820 --> 00:17:43.350
learned.
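That split can be sketched like this with NumPy (ten made-up samples; sklearn's train_test_split does the same job):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((10, 3))     # 10 hypothetical samples, 3 features each

# Shuffle, then carve off a train set and a test set.
idx = rng.permutation(len(X))
n_train = 7
X_train = X[idx[:n_train]]  # the samples we learn from
X_test = X[idx[n_train:]]   # the samples we evaluate on

print(X_train.shape, X_test.shape)  # (7, 3) (3, 3)
```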
00:17:44.950 --> 00:17:47.210
And usually we assume that all the data
00:17:47.210 --> 00:17:49.518
samples within X train and X test come
00:17:49.518 --> 00:17:51.240
from the same distribution and are
00:17:51.240 --> 00:17:52.505
independent of each other.
00:17:52.505 --> 00:17:54.620
So that term is called IID or
00:17:54.620 --> 00:17:56.470
independent identically distributed.
00:17:56.470 --> 00:17:59.760
And essentially that just means that no
00:17:59.760 --> 00:18:01.510
data point tells you anything about
00:18:01.510 --> 00:18:03.590
another data point if you know the sampling
00:18:03.590 --> 00:18:04.027
distribution.
00:18:04.027 --> 00:18:06.654
So they come from the same
00:18:06.654 --> 00:18:07.092
distribution.
00:18:07.092 --> 00:18:09.865
So they may have
00:18:09.865 --> 00:18:12.077
similar values to each other, but if
00:18:12.077 --> 00:18:13.466
you know that distribution, then
00:18:13.466 --> 00:18:14.299
they're independent.
00:18:14.360 --> 00:18:16.050
If you randomly download images from
00:18:16.050 --> 00:18:16.660
the Internet.
00:18:17.410 --> 00:18:19.105
Each image tells you something about
00:18:19.105 --> 00:18:20.460
images, but they don't really tell you
00:18:20.460 --> 00:18:22.336
directly anything about
00:18:22.336 --> 00:18:24.149
a specific other image.
00:18:27.230 --> 00:18:29.540
So let's look at an example from this
00:18:29.540 --> 00:18:33.550
Penguins data set that we use in the
00:18:33.550 --> 00:18:34.000
homework.
00:18:34.820 --> 00:18:36.640
And I'm not actually going to analyze
00:18:36.640 --> 00:18:38.120
it in a way that directly helps you
00:18:38.120 --> 00:18:38.720
with your homework.
00:18:38.720 --> 00:18:40.220
It's just an example that you may be
00:18:40.220 --> 00:18:40.760
familiar with.
00:18:41.830 --> 00:18:43.010
But let's look at this.
00:18:43.010 --> 00:18:44.670
So we have this.
00:18:44.670 --> 00:18:46.970
It's represented in this Pandas
00:18:46.970 --> 00:18:49.370
DataFrame, but it's basically just a tabular
00:18:49.370 --> 00:18:49.820
format.
00:18:50.490 --> 00:18:53.020
So we have a whole bunch of data points
00:18:53.020 --> 00:18:55.360
where we know the species, the island,
00:18:55.360 --> 00:18:56.600
the...
00:18:57.400 --> 00:18:58.810
I don't even know what a culmen is.
00:18:58.810 --> 00:18:59.950
Maybe the beak or something.
00:19:01.270 --> 00:19:03.130
Culmen length and depth,
00:19:03.130 --> 00:19:05.290
I don't know; flipper length,
00:19:05.290 --> 00:19:07.700
body mass, and the sex of the Penguin,
00:19:07.700 --> 00:19:08.700
which may be unknown.
00:19:10.120 --> 00:19:11.920
And so the first thing we do, which is
00:19:11.920 --> 00:19:14.158
in the starter code, is we try to
00:19:14.158 --> 00:19:17.830
process the data into a
00:19:17.830 --> 00:19:19.830
format that is more convenient for
00:19:19.830 --> 00:19:20.510
machine learning.
00:19:21.570 --> 00:19:24.270
And so, for example,
00:19:29.770 --> 00:19:32.790
the sklearn methods for training
00:19:29.770 --> 00:19:32.790
trees don't deal with like multi
00:19:32.790 --> 00:19:34.450
valued categorical variables.
00:19:34.450 --> 00:19:35.850
So it can't deal with the fact that
00:19:37.325 --> 00:19:39.065
there are like 3 different islands;
00:19:39.065 --> 00:19:39.540
it means you need to turn it into binary
00:19:39.065 --> 00:19:39.540
variables.
00:19:40.340 --> 00:19:42.430
And so the first thing that you often
00:19:42.430 --> 00:19:44.340
do when you're trying to analyze a
00:19:44.340 --> 00:19:48.020
problem is you, like, reformat the data
00:19:48.020 --> 00:19:51.250
in a way that allows you to process the
00:19:51.250 --> 00:19:53.370
data or learn from the data more
00:19:53.370 --> 00:19:54.130
conveniently.
00:19:54.980 --> 00:19:58.900
So in this code we read the CSV that
00:19:58.900 --> 00:20:02.280
gives us some tabular format for the
00:20:02.280 --> 00:20:03.190
Penguin data.
00:20:04.230 --> 00:20:08.290
And then I just form this into an array
00:20:08.290 --> 00:20:10.490
so I get extracted features.
00:20:10.490 --> 00:20:12.160
These are all the different columns of
00:20:12.160 --> 00:20:13.253
that Penguin data.
00:20:13.253 --> 00:20:15.072
I put it in a Numpy array.
00:20:15.072 --> 00:20:18.435
I get the species because that's what
00:20:18.435 --> 00:20:20.100
the problem was to predict.
00:20:20.100 --> 00:20:22.389
And then I get the unique values of the
00:20:22.390 --> 00:20:23.300
island.
00:20:23.300 --> 00:20:26.840
I get the unique values of the sex
00:20:26.840 --> 00:20:28.880
which will be male, female and unknown.
00:20:28.880 --> 00:20:32.760
And I initialize some array where I'm
00:20:32.760 --> 00:20:34.000
going to store my data.
00:20:34.430 --> 00:20:36.722
Then I loop through all the elements or
00:20:36.722 --> 00:20:38.250
all the data points, and I know that
00:20:38.250 --> 00:20:39.830
there's one data point for each Y
00:20:39.830 --> 00:20:41.440
value, so I loop through the length
00:20:41.440 --> 00:20:41.800
of Y.
00:20:42.950 --> 00:20:44.770
And then I just replace the island
00:20:44.770 --> 00:20:46.890
names with indicator variables,
00:20:46.890 --> 00:20:48.960
three indicator variables. I forget
00:20:48.960 --> 00:20:49.353
what they are...
00:20:49.353 --> 00:20:50.720
I guess they're down here. So if the
00:20:50.720 --> 00:20:51.930
island is Biscoe,
00:20:52.690 --> 00:20:54.830
Then the first value will be zero, I
00:20:54.830 --> 00:20:55.560
mean will be one.
00:20:56.460 --> 00:20:58.292
And otherwise it will be 0.
00:20:58.292 --> 00:21:00.690
If the island is dream then the second
00:21:00.690 --> 00:21:02.850
value will be one and otherwise it will
00:21:02.850 --> 00:21:03.390
be 0.
00:21:03.390 --> 00:21:06.620
And if the island is Torgerson then the
00:21:06.620 --> 00:21:09.028
third value will be one and otherwise
00:21:09.028 --> 00:21:10.460
it will be 0.
00:21:10.460 --> 00:21:12.120
So exactly one of these should be equal
00:21:12.120 --> 00:21:13.646
to 1 and the other should be equal to
00:21:13.646 --> 00:21:13.820
0.
00:21:14.710 --> 00:21:16.154
Then I fill in the floating point
00:21:16.154 --> 00:21:17.980
values for these other things and then
00:21:17.980 --> 00:21:19.830
I do the same for this X.
00:21:19.830 --> 00:21:22.420
So one of these three values, female,
00:21:22.420 --> 00:21:24.892
male or unknown will be a one and the
00:21:24.892 --> 00:21:26.160
other two will be a 0.
00:21:26.950 --> 00:21:28.590
And so at the end of this I have this
00:21:28.590 --> 00:21:32.650
like now this data vector where each
00:21:32.650 --> 00:21:33.380
column.
00:21:34.050 --> 00:21:36.340
Will be either like a binary number or
00:21:36.340 --> 00:21:39.103
a floating point number that tells me
00:21:39.103 --> 00:21:42.360
like what island or what sex and what
00:21:42.360 --> 00:21:46.870
the Penguin had and then the I'll have
00:21:46.870 --> 00:21:50.620
a row for each data sample and for Y
00:21:50.620 --> 00:21:52.440
I'll just have a row for each data
00:21:52.440 --> 00:21:55.360
sample that has the name of the thing
00:21:55.360 --> 00:21:56.920
I'm trying to predict, the species.
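
NOTE
The preprocessing just described can be sketched roughly like this. A tiny made-up table stands in for the Penguin CSV, and the column names here are assumptions, not the actual file's headers:

```python
import numpy as np
import pandas as pd

# Tiny made-up stand-in for the Penguin table read from the CSV.
df = pd.DataFrame({
    "island": ["Biscoe", "Dream", "Torgersen", "Biscoe"],
    "sex": ["male", "female", "unknown", "female"],
    "body_mass_g": [4200.0, 3500.0, 3800.0, 4600.0],
    "species": ["Gentoo", "Adelie", "Adelie", "Gentoo"],
})

islands = np.unique(df["island"])   # ['Biscoe', 'Dream', 'Torgersen']
sexes = np.unique(df["sex"])        # ['female', 'male', 'unknown']
y = df["species"].to_numpy()        # the labels we want to predict

# One indicator column per island and per sex, plus one numeric column.
X = np.zeros((len(y), len(islands) + len(sexes) + 1))
for i in range(len(y)):
    # Exactly one island indicator and one sex indicator is 1 per row.
    X[i, :len(islands)] = (islands == df["island"].iloc[i]).astype(float)
    X[i, len(islands):len(islands) + len(sexes)] = \
        (sexes == df["sex"].iloc[i]).astype(float)
    X[i, -1] = df["body_mass_g"].iloc[i]
```

The real code would fill in all the remaining floating-point measurement columns the same way.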
00:22:01.580 --> 00:22:04.390
So if we have some data set like that,
00:22:04.390 --> 00:22:06.040
then how do we measure it?
00:22:06.040 --> 00:22:09.156
So there's some simple things we can
00:22:09.156 --> 00:22:09.468
do.
00:22:09.468 --> 00:22:11.235
One is we can just measure the shape so
00:22:11.235 --> 00:22:15.520
we can see this has 341 data samples
00:22:15.520 --> 00:22:17.070
and I've got 10 features.
00:22:18.070 --> 00:22:20.730
I can also start to think about it now
00:22:20.730 --> 00:22:21.710
as the distribution.
00:22:21.710 --> 00:22:23.520
So it's no longer just like an
00:22:23.520 --> 00:22:25.500
individual point or an individual set
00:22:25.500 --> 00:22:27.940
of values, but it's a distribution.
00:22:27.940 --> 00:22:29.377
There's some probability that I'll
00:22:29.377 --> 00:22:31.674
observe some sets of values, and some
00:22:31.674 --> 00:22:33.520
probability that I'll observe other
00:22:33.520 --> 00:22:34.309
sets of values.
00:22:35.020 --> 00:22:37.100
And so one really simple way that I can
00:22:37.100 --> 00:22:39.460
measure the distribution is by looking
00:22:39.460 --> 00:22:41.213
at the mean and the standard deviation.
00:22:41.213 --> 00:22:43.950
If it were a Gaussian distribution
00:22:43.950 --> 00:22:46.015
where the values are independent from
00:22:46.015 --> 00:22:47.665
each other, that is, if the
00:22:47.665 --> 00:22:49.071
different features are independent from
00:22:49.071 --> 00:22:50.860
each other in a Gaussian, this would
00:22:50.860 --> 00:22:52.300
tell me everything there is to know
00:22:52.300 --> 00:22:53.780
about the distribution.
00:22:53.780 --> 00:22:55.996
But in practice you rarely have a
00:22:55.996 --> 00:22:56.339
Gaussian.
00:22:56.340 --> 00:22:58.210
Usually it's a bit more complicated.
00:22:58.210 --> 00:22:59.206
Still, it's a useful thing.
00:22:59.206 --> 00:23:02.680
So it tells me that like the body mass
00:23:02.680 --> 00:23:05.630
average is 4200 grams.
00:23:05.950 --> 00:23:08.185
And the standard deviation is 800, so
00:23:08.185 --> 00:23:10.890
so the average is like 4.2
00:23:10.890 --> 00:23:12.720
kilograms, but there's like a
00:23:12.720 --> 00:23:14.110
significant variance there.
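
NOTE
As a minimal sketch of that measurement, using a made-up body-mass sample rather than the real Penguin values:

```python
import numpy as np

# Made-up body-mass sample in grams (not the real Penguin values).
body_mass = np.array([3500.0, 3800.0, 4200.0, 4600.0, 5000.0])

mean = float(body_mass.mean())
std = float(body_mass.std())  # population std; pass ddof=1 for the sample estimate

print(mean, std)  # roughly 4220 and 538
```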
00:23:18.640 --> 00:23:23.121
One of the key things to know is that
00:23:23.121 --> 00:23:25.580
I'm just getting an
00:23:25.580 --> 00:23:27.270
empirical estimate of this
00:23:27.270 --> 00:23:29.855
distribution, so I don't know what the
00:23:29.855 --> 00:23:30.686
true mean is.
00:23:30.686 --> 00:23:32.625
I don't know what the true standard
00:23:32.625 --> 00:23:33.179
deviation is.
00:23:33.180 --> 00:23:34.970
All I know is what the mean and the
00:23:34.970 --> 00:23:37.240
standard deviation is of my sample, and
00:23:37.240 --> 00:23:39.240
if I were to draw different samples, I
00:23:39.240 --> 00:23:41.530
would get different estimates of the
00:23:41.530 --> 00:23:42.780
mean and the standard deviation.
00:23:43.750 --> 00:23:46.770
So in the top row, I'm resampling this
00:23:46.770 --> 00:23:49.640
data using this convenient sample
00:23:49.640 --> 00:23:52.720
function that the pandas framework has,
00:23:52.720 --> 00:23:54.693
and then taking the mean each time.
00:23:54.693 --> 00:23:57.310
So you can see that one time 45% of the
00:23:57.310 --> 00:23:59.480
Penguins come from Biscoe, another time
00:23:59.480 --> 00:24:02.770
it's 54%, and another time it's 44%.
00:24:02.770 --> 00:24:05.330
So this is drawing 100 samples with
00:24:05.330 --> 00:24:06.070
replacement.
00:24:06.990 --> 00:24:10.570
And by the way, this is like
00:24:10.570 --> 00:24:11.220
bootstrapping.
00:24:11.220 --> 00:24:12.795
If I want to say what's the variance of
00:24:12.795 --> 00:24:13.232
my estimate?
00:24:13.232 --> 00:24:16.240
If I had 100 samples of data, I could
00:24:16.240 --> 00:24:18.920
repeat this random sampling 100 times
00:24:18.920 --> 00:24:20.800
and then take the variance of my mean
00:24:20.800 --> 00:24:22.528
and that would give me the variance of
00:24:22.528 --> 00:24:24.718
my estimate, even though I have a
00:24:24.718 --> 00:24:27.360
rather
00:24:27.360 --> 00:24:29.270
small sample to draw that estimate
00:24:29.270 --> 00:24:29.820
from.
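
NOTE
A rough sketch of the bootstrap idea described here, using a made-up binary Biscoe column rather than the real data:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Made-up binary column: 1 if a penguin comes from Biscoe (about 47%).
df = pd.DataFrame({"biscoe": (rng.random(300) < 0.47).astype(float)})

# Resample with replacement many times and take the mean each time;
# the spread of those means estimates the variance of our estimate.
means = [df["biscoe"].sample(n=100, replace=True, random_state=s).mean()
         for s in range(200)]
boot_std = float(np.std(means))  # estimated standard error of the mean
```

Drawing n=1000 instead of n=100 would make the resampled means cluster more tightly, matching the point made next.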
00:24:31.210 --> 00:24:33.040
If I have more data, I'm going to get
00:24:33.040 --> 00:24:34.900
more accurate estimates.
00:24:34.900 --> 00:24:39.189
So if I sample 1000 samples, I'm
00:24:39.190 --> 00:24:40.780
drawing samples with replacement.
00:24:42.390 --> 00:24:44.749
Then the averages become much more
00:24:44.750 --> 00:24:45.140
similar.
00:24:45.140 --> 00:24:49.650
So now Biscoe goes from .475 to .473 to
00:24:49.650 --> 00:24:52.220
.484, so it's a much smaller range than
00:24:52.220 --> 00:24:54.382
it was when I drew 100 samples.
00:24:54.382 --> 00:24:56.635
So in general like, the more I'm able
00:24:56.635 --> 00:24:59.970
to draw, the tighter my estimate of the
00:24:59.970 --> 00:25:01.260
distribution will be.
00:25:01.870 --> 00:25:03.525
But it's always an estimate of the
00:25:03.525 --> 00:25:03.757
distribution.
00:25:03.757 --> 00:25:05.120
It's not the true distribution.
00:25:08.870 --> 00:25:10.560
So there's also other ways that we can
00:25:10.560 --> 00:25:12.100
try to measure this data set.
00:25:12.100 --> 00:25:16.120
So one idea is to try to measure the
00:25:16.120 --> 00:25:18.110
entropy of a particular variable.
00:25:19.420 --> 00:25:21.610
If the variable is discrete, which
00:25:21.610 --> 00:25:24.015
means that it has like integer values,
00:25:24.015 --> 00:25:26.400
it has a finite number of values.
00:25:27.450 --> 00:25:29.870
And then we can measure it by counting.
00:25:29.870 --> 00:25:34.100
So we can say that the entropy will be
00:25:34.100 --> 00:25:36.040
the negative sum all the different
00:25:36.040 --> 00:25:37.670
values of that variable of the
00:25:37.670 --> 00:25:39.360
probability of that value times the log
00:25:39.360 --> 00:25:40.470
probability of that value.
00:25:41.340 --> 00:25:42.550
And I can count it like this.
00:25:42.550 --> 00:25:44.300
I can just say in this case these are
00:25:44.300 --> 00:25:47.080
binary, so I just count how many times
00:25:47.080 --> 00:25:49.190
XI equals zero or the fraction of times
00:25:49.190 --> 00:25:51.030
that's the probability of XI = 0.
00:25:52.240 --> 00:25:54.494
The fraction of times XI equals one, and
00:25:54.494 --> 00:25:57.222
then my entropy, not cross
00:25:57.222 --> 00:25:57.675
entropy.
00:25:57.675 --> 00:25:59.623
My entropy is the negative probability
00:25:59.623 --> 00:26:02.090
of XI equals zero times the log base
00:26:02.090 --> 00:26:04.290
two probability of XI equals 0 minus
00:26:04.290 --> 00:26:07.269
probability XI equals one times log
00:26:07.269 --> 00:26:09.310
probability of XI equal 1.
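
NOTE
The counting-based entropy just described can be written as a short function. This is a sketch, not the code from the slides:

```python
import numpy as np

def binary_entropy(x):
    """Entropy in bits of a binary (0/1) array, estimated by counting."""
    p1 = float(np.mean(x))       # fraction of ones = P(x == 1)
    h = 0.0
    for p in (p1, 1.0 - p1):
        if p > 0:                # treat 0 * log2(0) as 0
            h -= p * np.log2(p)
    return h

print(binary_entropy(np.array([1, 1, 1, 1])))  # always 1: 0 bits needed
print(binary_entropy(np.array([0, 1, 0, 1])))  # 50/50: a full bit
```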
00:26:10.770 --> 00:26:13.460
The log base two thing is like a
00:26:13.460 --> 00:26:15.360
convention, and it means that this
00:26:15.360 --> 00:26:17.600
entropy is measured in bits.
00:26:17.600 --> 00:26:20.550
So it's essentially how many bits you
00:26:20.550 --> 00:26:23.686
would need theoretically to be able to
00:26:23.686 --> 00:26:25.570
like disambiguate this value or specify
00:26:25.570 --> 00:26:26.310
this value.
00:26:27.030 --> 00:26:29.690
If you had a, if your data were all
00:26:29.690 --> 00:26:31.540
ones, then you really don't need any
00:26:31.540 --> 00:26:32.929
bits to represent it because it's
00:26:32.930 --> 00:26:33.870
always a 1.
00:26:33.870 --> 00:26:35.930
But if it's like a completely random
00:26:35.930 --> 00:26:38.469
value, 50/50 chance that it's a zero or
00:26:38.469 --> 00:26:40.942
one, then you need one bit to represent
00:26:40.942 --> 00:26:42.965
it because you until you observe it,
00:26:42.965 --> 00:26:44.245
you have no idea what it is, so you
00:26:44.245 --> 00:26:47.030
need a full bit to represent that bit.
00:26:48.460 --> 00:26:50.470
So if I look at Island Biscoe, it's
00:26:50.470 --> 00:26:53.010
almost a 50/50 chance, so the entropy is
00:26:53.010 --> 00:26:53.510
very high.
00:26:53.510 --> 00:26:54.580
It's .999.
00:26:55.280 --> 00:26:57.050
If I look at a different feature index,
00:26:57.050 --> 00:26:58.400
the one for Torgerson.
00:26:59.510 --> 00:27:02.460
Only like 15% of the Penguins come from
00:27:02.460 --> 00:27:05.100
Torgersen and so the entropy is much
00:27:05.100 --> 00:27:05.690
lower.
00:27:05.690 --> 00:27:07.020
It's .69.
00:27:11.760 --> 00:27:14.140
We can also measure the entropy of
00:27:14.140 --> 00:27:16.130
continuous variables.
00:27:16.130 --> 00:27:19.030
So if I have, for example the culmen
00:27:19.030 --> 00:27:19.700
length.
00:27:19.700 --> 00:27:21.500
Now I can't just like count how many
00:27:21.500 --> 00:27:23.450
times I observe each value of culmen
00:27:23.450 --> 00:27:25.030
length, because those values may be
00:27:25.030 --> 00:27:25.420
unique.
00:27:25.420 --> 00:27:26.880
I'll probably observe each value
00:27:26.880 --> 00:27:27.620
exactly once.
00:27:28.730 --> 00:27:31.589
And so instead we need to we need to
00:27:31.590 --> 00:27:34.130
have other ways of estimating that
00:27:34.130 --> 00:27:35.560
continuous distribution.
00:27:36.890 --> 00:27:39.610
So mathematically, the entropy of the
00:27:39.610 --> 00:27:42.630
variable X is now the negative integral
00:27:42.630 --> 00:27:44.760
over all the possible values X of
00:27:44.760 --> 00:27:47.395
probability of X times log probability
00:27:47.395 --> 00:27:48.550
of X.
00:27:48.550 --> 00:27:51.300
But this becomes a kind of complicated
00:27:51.300 --> 00:27:55.110
in a way because our data, while the
00:27:55.110 --> 00:27:56.780
values may be continuous, we don't have
00:27:56.780 --> 00:27:58.850
access to a continuous or infinite
00:27:58.850 --> 00:27:59.510
amount of data.
00:28:00.160 --> 00:28:02.350
And so we always need to estimate this
00:28:02.350 --> 00:28:04.520
continuous distribution based on our
00:28:04.520 --> 00:28:05.400
discrete sample.
00:28:07.160 --> 00:28:08.500
There's a lot of different ways of
00:28:08.500 --> 00:28:10.550
doing this, but one of the most common
00:28:10.550 --> 00:28:14.467
is to break up our continuous variable
00:28:14.467 --> 00:28:17.882
into smaller discrete variables into
00:28:17.882 --> 00:28:20.430
smaller discrete ranges, and then count
00:28:20.430 --> 00:28:22.220
for each of those discrete ranges.
00:28:22.220 --> 00:28:23.460
So that's what I did here.
00:28:24.260 --> 00:28:27.320
So I get the XI for the.
00:28:27.320 --> 00:28:28.690
This is for the culmen length.
00:28:30.780 --> 00:28:33.060
I forgot to include this printed value,
00:28:33.060 --> 00:28:35.790
but the printed value here
00:28:35.790 --> 00:28:37.600
is just a lot; I think like all the
00:28:37.600 --> 00:28:38.420
values are unique.
00:28:39.230 --> 00:28:42.000
And I'm creating like empty indices
00:28:42.000 --> 00:28:44.604
because I'm being lazy here for the X
00:28:44.604 --> 00:28:47.915
value and for the probability of each X
00:28:47.915 --> 00:28:48.290
value.
00:28:49.190 --> 00:28:51.000
And I'm setting a step size of 1.
00:28:52.010 --> 00:28:54.635
Then I loop from the minimum value plus
00:28:54.635 --> 00:28:57.167
half a step to the maximum value minus
00:28:57.167 --> 00:28:58.094
half a step.
00:28:58.094 --> 00:28:59.020
I take steps.
00:28:59.020 --> 00:29:01.799
So I take steps of 1 from maybe like
00:29:01.800 --> 00:29:05.348
whoops, from maybe like 30, stop from
00:29:05.348 --> 00:29:07.340
maybe 30 to 60.
00:29:07.340 --> 00:29:10.460
And for each of those steps I count how
00:29:10.460 --> 00:29:14.870
many times I see a value within a range
00:29:14.870 --> 00:29:16.750
of like my current value minus half
00:29:16.750 --> 00:29:18.050
step plus half step.
00:29:18.050 --> 00:29:20.485
So for example, the first one will be
00:29:20.485 --> 00:29:21.890
from say like.
00:29:21.950 --> 00:29:24.860
How many times do I observe the culmen
00:29:24.860 --> 00:29:27.900
length between like 31 and 32?
00:29:28.670 --> 00:29:30.676
And so that will be my mean.
00:29:30.676 --> 00:29:32.370
So this is I'm estimating the
00:29:32.370 --> 00:29:34.010
probability that it falls within this
00:29:34.010 --> 00:29:34.440
range.
00:29:35.380 --> 00:29:37.130
And then I can turn this into a
00:29:37.130 --> 00:29:39.940
continuous distribution by dividing by
00:29:39.940 --> 00:29:40.850
the step size.
00:29:42.310 --> 00:29:43.820
So that will make it comparable.
00:29:43.820 --> 00:29:44.960
If I were to choose different step
00:29:44.960 --> 00:29:47.050
sizes, I should get like fairly similar
00:29:47.050 --> 00:29:47.620
plots.
00:29:47.620 --> 00:29:50.330
And the 1e-20 is just to avoid a
00:29:50.330 --> 00:29:52.140
divide by zero without really changing
00:29:52.140 --> 00:29:52.610
much else.
00:29:54.690 --> 00:29:58.290
So then I plot it, and the entropy
00:29:58.290 --> 00:30:01.727
is just the negative sum of all of
00:30:01.727 --> 00:30:04.750
these different probabilities, the
00:30:04.750 --> 00:30:06.875
discrete probabilities now of these
00:30:06.875 --> 00:30:10.010
different ranges times the log 2
00:30:10.010 --> 00:30:12.460
probability of each of those ranges.
00:30:13.090 --> 00:30:17.120
And then I need to multiply that by the
00:30:17.120 --> 00:30:18.680
step size as well, which in this case
00:30:18.680 --> 00:30:19.380
is just one.
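
NOTE
A compact sketch of this histogram-style entropy estimate for a continuous variable. The sample here is synthetic Gaussian data standing in for the culmen-length column:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for the culmen-length column.
x = rng.normal(44.0, 5.0, size=300)

step = 1.0
# Bin edges in steps of `step` covering the observed range.
edges = np.arange(x.min(), x.max() + step, step)
counts, _ = np.histogram(x, bins=edges)

p = counts / counts.sum()   # probability of landing in each bin
density = p / step          # divide by step size to get a density
nz = density > 0
# Entropy estimate in bits: sum over bins of
# -density * log2(density) * step, as in the lecture's formula.
H = float(-np.sum(density[nz] * np.log2(density[nz])) * step)
```

Changing `step` changes both the plot and the estimate, which is exactly the sensitivity discussed next.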
00:30:24.540 --> 00:30:27.018
OK, and then so I get an estimate.
00:30:27.018 --> 00:30:28.540
So this is the plot.
00:30:28.540 --> 00:30:30.950
This is the probability.
00:30:30.950 --> 00:30:32.840
It's my estimate of the continuous
00:30:32.840 --> 00:30:36.345
probability now of each variable of
00:30:36.345 --> 00:30:37.270
each value of X.
00:30:37.950 --> 00:30:39.520
And then this is my estimate of the
00:30:39.520 --> 00:30:40.180
entropy.
00:30:45.190 --> 00:30:48.320
So as I mentioned, I would like
00:30:48.320 --> 00:30:50.640
continuous features are kind of tricky
00:30:50.640 --> 00:30:52.360
because it depends on.
00:30:52.360 --> 00:30:54.240
I can estimate their probabilities in
00:30:54.240 --> 00:30:56.310
different ways and that will give me
00:30:56.310 --> 00:30:58.790
different distributions and different
00:30:58.790 --> 00:31:00.400
measurements of things like entropy.
00:31:01.340 --> 00:31:04.420
So if I chose a different step size, if
00:31:04.420 --> 00:31:06.950
I step in .1, that means I'm going to
00:31:06.950 --> 00:31:08.719
count how many times I observe this
00:31:08.720 --> 00:31:11.220
continuous variable in little tiny
00:31:11.220 --> 00:31:11.660
ranges.
00:31:11.660 --> 00:31:14.010
How many times do I observe it between
00:31:14.010 --> 00:31:16.222
40.0 and 40.1?
00:31:16.222 --> 00:31:18.030
And sometimes I might have no
00:31:18.030 --> 00:31:19.630
observations because I only have like
00:31:19.630 --> 00:31:22.216
300 data points and so that's why when
00:31:22.216 --> 00:31:24.370
I plot it as a line plot, I get this
00:31:24.370 --> 00:31:25.965
like super spiky thing because I've got
00:31:25.965 --> 00:31:27.640
a bunch of zeros, but I didn't observe
00:31:27.640 --> 00:31:29.390
anything in those tiny step sizes.
00:31:29.390 --> 00:31:30.950
And then there's other times when I
00:31:30.950 --> 00:31:31.200
observe.
00:31:31.250 --> 00:31:32.260
Several points.
00:31:32.930 --> 00:31:34.630
Inside of a tiny step size.
00:31:36.100 --> 00:31:37.710
So these are different representations
00:31:37.710 --> 00:31:40.780
of the same data and it's kind of like
00:31:40.780 --> 00:31:43.312
up to us to decide to think about like
00:31:43.312 --> 00:31:45.690
which of these is a better
00:31:45.690 --> 00:31:47.360
representation, which one do we think
00:31:47.360 --> 00:31:49.290
more closely reflects the true
00:31:49.290 --> 00:31:50.100
distribution?
00:31:51.310 --> 00:31:53.600
And I guess I'll ask you, so do you
00:31:53.600 --> 00:31:55.750
think if I had to rely on one of these
00:31:55.750 --> 00:31:58.632
as a probability density estimate of
00:31:58.632 --> 00:32:01.360
this, of this variable, would you
00:32:01.360 --> 00:32:03.790
prefer the left side or the right side?
00:32:06.800 --> 00:32:07.090
Right.
00:32:08.680 --> 00:32:09.930
All right, I'll take a vote.
00:32:09.930 --> 00:32:11.590
So how many prefer the left side?
00:32:13.000 --> 00:32:14.850
How many prefer the right side?
00:32:14.850 --> 00:32:16.750
That's interesting.
00:32:17.770 --> 00:32:20.455
OK, so it's mixed and there's not
00:32:20.455 --> 00:32:22.650
really a right answer, but I personally
00:32:22.650 --> 00:32:23.853
would prefer the left side.
00:32:23.853 --> 00:32:25.960
And the reason is just because I don't
00:32:25.960 --> 00:32:26.433
really think.
00:32:26.433 --> 00:32:28.580
It's true that there's like a whole lot
00:32:28.580 --> 00:32:31.898
of Penguins that would have a length of
00:32:31.898 --> 00:32:32.750
like 40.5.
00:32:32.750 --> 00:32:35.190
But then it's almost impossible for a
00:32:35.190 --> 00:32:37.059
Penguin to have a length of 40.6.
00:32:37.059 --> 00:32:38.900
But then 40.7 is like pretty likely.
00:32:38.900 --> 00:32:41.185
Again, that's not, that's not my model
00:32:41.185 --> 00:32:42.440
of how the world works.
00:32:42.440 --> 00:32:44.370
I tend to think that this distribution
00:32:44.370 --> 00:32:45.870
should be pretty smooth, right?
00:32:45.870 --> 00:32:47.020
It might be a multimodal.
00:32:47.080 --> 00:32:50.486
Distribution you might have like the
00:32:50.486 --> 00:32:53.250
adult males, the adult females, and the
00:32:53.250 --> 00:32:54.850
kid Penguins.
00:32:54.850 --> 00:32:56.260
Maybe that's what it is.
00:32:57.300 --> 00:32:58.140
I don't really know.
00:32:58.140 --> 00:32:58.466
I'm not.
00:32:58.466 --> 00:32:59.700
I don't study Penguins.
00:32:59.700 --> 00:33:00.770
But it's possible.
00:33:03.440 --> 00:33:04.040
That's right.
00:33:07.480 --> 00:33:11.030
So the as I mentioned, the entropy
00:33:11.030 --> 00:33:12.350
measures how many bits?
00:33:12.350 --> 00:33:13.130
Question.
00:33:14.390 --> 00:33:14.960
Yeah.
00:33:30.580 --> 00:33:34.050
So that's a good question, comment so.
00:33:35.640 --> 00:33:37.275
The so you might choose.
00:33:37.275 --> 00:33:38.960
So you're saying that you chose this
00:33:38.960 --> 00:33:40.870
because the entropy is lower.
00:33:41.620 --> 00:33:45.100
The.
00:33:46.510 --> 00:33:48.210
So that kind of like makes sense
00:33:48.210 --> 00:33:51.420
intuitively, but I would say the reason
00:33:51.420 --> 00:33:54.369
that I wouldn't choose the entropy
00:33:54.370 --> 00:33:55.832
value as a way of choosing the
00:33:55.832 --> 00:33:58.095
distribution is that these entropy
00:33:58.095 --> 00:33:59.740
values are actually not like the true
00:33:59.740 --> 00:34:00.590
entropy values.
00:34:00.590 --> 00:34:02.920
They're just the estimate of the
00:34:02.920 --> 00:34:04.470
entropy based on the distribution that
00:34:04.470 --> 00:34:06.050
we estimated.
00:34:06.050 --> 00:34:08.160
And for example, if I really want to
00:34:08.160 --> 00:34:10.941
minimize this distribution or the
00:34:10.941 --> 00:34:12.800
entropy, I would say that my
00:34:12.800 --> 00:34:14.510
distribution is just like a bunch of
00:34:14.510 --> 00:34:16.050
delta functions, which means that.
00:34:16.100 --> 00:34:17.600
They say that each data point that I
00:34:17.600 --> 00:34:20.235
observed is equally likely.
00:34:20.235 --> 00:34:22.887
So if I have 300 data points and each
00:34:22.887 --> 00:34:24.518
one has a probability of one out of
00:34:24.518 --> 00:34:27.360
300, and that will minimize my entropy.
00:34:27.360 --> 00:34:29.660
But it will also mean that basically
00:34:29.660 --> 00:34:31.070
all I can do is represent those
00:34:31.070 --> 00:34:32.700
particular data points and I won't have
00:34:32.700 --> 00:34:34.540
any generalization to new data.
00:34:34.540 --> 00:34:37.386
So I think that's a really good point
00:34:37.386 --> 00:34:38.200
to bring up.
00:34:39.540 --> 00:34:42.970
That we have to, like, always
00:34:42.970 --> 00:34:45.430
remember that the measurements that we
00:34:45.430 --> 00:34:47.290
make on data are not like true
00:34:47.290 --> 00:34:47.560
measurements.
00:34:47.560 --> 00:34:48.012
They're not.
00:34:48.012 --> 00:34:49.585
They don't tell us anything, or they
00:34:49.585 --> 00:34:51.354
tell us something, but they don't
00:34:51.354 --> 00:34:52.746
reveal the true distribution.
00:34:52.746 --> 00:34:54.690
They only reveal what we've estimated
00:34:54.690 --> 00:34:55.939
about the distribution.
00:34:55.940 --> 00:34:57.907
And those estimates depend not only on
00:34:57.907 --> 00:34:59.519
the data that we're measuring, but the
00:34:59.520 --> 00:35:00.570
way that we measure it.
00:35:01.820 --> 00:35:04.593
So that's like a really tricky, that's
00:35:04.593 --> 00:35:07.590
like a really tricky concept that is
00:35:07.590 --> 00:35:09.329
kind of like the main concept that.
00:35:10.330 --> 00:35:12.280
That I'm trying to illustrate.
00:35:15.110 --> 00:35:17.257
All right, so the entropy measures like
00:35:17.257 --> 00:35:20.270
how many bits are required to store an
00:35:20.270 --> 00:35:22.872
element of data, the true entropy.
00:35:22.872 --> 00:35:25.320
So the true entropy again, if they
00:35:25.320 --> 00:35:27.250
were, if we were able to know the
00:35:27.250 --> 00:35:28.965
distribution, which we almost never
00:35:28.965 --> 00:35:29.350
know.
00:35:29.350 --> 00:35:31.960
But if we knew it, and we had an ideal
00:35:31.960 --> 00:35:34.230
way to store the data, then the entropy
00:35:34.230 --> 00:35:35.900
tells us how many bits we would need in
00:35:35.900 --> 00:35:37.390
order to store that data in the most
00:35:37.390 --> 00:35:38.750
compressed format possible.
00:35:43.500 --> 00:35:46.600
So does this mean that the entropy is a
00:35:46.600 --> 00:35:48.780
measure of information?
00:35:50.290 --> 00:35:50.970
So.
00:35:52.540 --> 00:35:54.419
How many people would say that the
00:35:54.420 --> 00:35:56.113
entropy is a measure?
00:35:56.113 --> 00:35:58.130
Is the information that the data
00:35:58.130 --> 00:35:59.140
contains?
00:36:00.740 --> 00:36:02.000
If yes, raise your hand.
00:36:04.860 --> 00:36:07.890
If no raise, raise your hand.
00:36:07.890 --> 00:36:10.260
OK, so more people say no,
00:36:10.260 --> 00:36:11.400
so why not?
00:36:13.870 --> 00:36:14.580
Just measures.
00:36:15.690 --> 00:36:16.130
[inaudible]
00:36:19.330 --> 00:36:21.230
The information environment more like.
00:36:22.430 --> 00:36:25.340
The incoming data has like that
00:36:25.340 --> 00:36:25.710
element.
00:36:27.760 --> 00:36:29.580
The company information communication,
00:36:29.580 --> 00:36:32.200
but not correct, right?
00:36:32.200 --> 00:36:33.640
Yeah, so I think that I think what
00:36:33.640 --> 00:36:36.700
you're saying is that the entropy
00:36:36.700 --> 00:36:38.920
measures essentially like how hard it
00:36:38.920 --> 00:36:40.170
is to predict some variable.
00:36:40.820 --> 00:36:43.680
But it doesn't mean that variable like
00:36:43.680 --> 00:36:45.320
tells us anything about anything else,
00:36:45.320 --> 00:36:46.080
right?
00:36:46.080 --> 00:36:47.870
It's just how hard this variable is
00:36:47.870 --> 00:36:49.040
fixed, right?
00:36:49.040 --> 00:36:53.230
And so you could say so again that both
00:36:53.230 --> 00:36:54.680
of those answers can be correct.
00:36:55.530 --> 00:36:58.860
For example, if I have a random array,
00:36:58.860 --> 00:37:00.820
you would probably say like.
00:37:00.820 --> 00:37:02.210
Intuitively this doesn't contain
00:37:02.210 --> 00:37:02.863
information, right?
00:37:02.863 --> 00:37:05.370
If I just say I generated this random
00:37:05.370 --> 00:37:06.910
variable, it's a bunch of zeros and
00:37:06.910 --> 00:37:07.300
ones.
00:37:07.300 --> 00:37:09.260
A 50/50 chance it's each one.
00:37:09.960 --> 00:37:13.120
Here's a whole TB of this like random
00:37:13.120 --> 00:37:15.325
variable that I generated for you now.
00:37:15.325 --> 00:37:16.810
Like how much is this worth?
00:37:17.430 --> 00:37:18.690
You would probably be like, it's not
00:37:18.690 --> 00:37:20.080
really worth anything because it
00:37:20.080 --> 00:37:21.926
doesn't like tell me anything about
00:37:21.926 --> 00:37:23.290
anything else, right?
00:37:23.290 --> 00:37:26.700
And so in this case,
00:37:26.700 --> 00:37:29.060
like knowing the value of this random
00:37:29.060 --> 00:37:30.907
variable only gives me information
00:37:30.907 --> 00:37:32.250
about itself, it doesn't give me
00:37:32.250 --> 00:37:33.410
information about anything else.
00:37:34.210 --> 00:37:36.390
And so information is always a relative
00:37:36.390 --> 00:37:37.040
term, right?
00:37:37.840 --> 00:37:41.335
Information is the amount of
00:37:41.335 --> 00:37:43.630
uncertainty about something that's
00:37:43.630 --> 00:37:45.760
reduced by knowing something else.
00:37:45.760 --> 00:37:48.190
So if I know the temperature of today,
00:37:48.190 --> 00:37:50.090
then that might reduce my uncertainty
00:37:50.090 --> 00:37:52.810
about the temperature of tomorrow or
00:37:52.810 --> 00:37:54.010
whether it's a good idea to wear a
00:37:54.010 --> 00:37:55.740
jacket when I go out, right?
00:37:55.740 --> 00:37:57.330
So the temperature of today gives me
00:37:57.330 --> 00:37:58.383
information about that.
00:37:58.383 --> 00:38:00.839
But the but knowing the temperature of
00:38:00.839 --> 00:38:02.495
today does not give me any information
00:38:02.495 --> 00:38:04.260
about who's the President of the United
00:38:04.260 --> 00:38:04.895
States.
00:38:04.895 --> 00:38:07.040
So it has information about certain
00:38:07.040 --> 00:38:07.930
things and doesn't have.
00:38:07.980 --> 00:38:09.300
Information about other things.
00:38:12.900 --> 00:38:14.570
So we have this measure called
00:38:14.570 --> 00:38:19.410
information gain, which is a measure of
00:38:19.410 --> 00:38:23.221
how much information does one variable
00:38:23.221 --> 00:38:25.711
give me about another variable, or one
00:38:25.711 --> 00:38:28.201
set of variables give me about another
00:38:28.201 --> 00:38:29.979
variable or set of variables.
00:38:31.690 --> 00:38:35.820
So the information gain of Y given X is
00:38:35.820 --> 00:38:36.480
the.
00:38:37.590 --> 00:38:39.790
Is the entropy of Y my initial
00:38:39.790 --> 00:38:41.515
uncertainty and being able to predict
00:38:41.515 --> 00:38:41.850
Y?
00:38:42.860 --> 00:38:44.980
Minus the entropy of Y given X.
00:38:45.570 --> 00:38:47.230
In other words, like how uncertain am I
00:38:47.230 --> 00:38:50.320
still about Y after I know X, and this
00:38:50.320 --> 00:38:51.970
difference is the information gain.
00:38:51.970 --> 00:38:54.940
So if I want to know what is the
00:38:54.940 --> 00:38:57.280
temperature going to be in 5 minutes.
00:38:57.280 --> 00:38:59.389
So knowing the temperature right now
00:38:59.390 --> 00:39:01.450
has super high information gain, it
00:39:01.450 --> 00:39:04.350
reduces my entropy almost completely.
00:39:04.350 --> 00:39:05.530
Where knowing the temperature right
00:39:05.530 --> 00:39:07.418
now, if I want to know the temperature
00:39:07.418 --> 00:39:09.900
in 10 days, my information gain would
00:39:09.900 --> 00:39:10.380
be low.
00:39:10.380 --> 00:39:12.233
It might tell me like some guess about
00:39:12.233 --> 00:39:13.890
what season it is that can help a
00:39:13.890 --> 00:39:15.809
little bit, but it's not going to be
00:39:15.810 --> 00:39:16.400
very.
00:39:16.790 --> 00:39:18.450
Highly predictive of the temperature in
00:39:18.450 --> 00:39:18.910
10 days.
00:39:22.270 --> 00:39:25.140
So we can so we can also, of course
00:39:25.140 --> 00:39:25.980
compute this.
00:39:27.800 --> 00:39:28.590
With code.
00:39:28.590 --> 00:39:30.430
So here I'm computing the information
00:39:30.430 --> 00:39:32.560
gain over binary variables.
00:39:34.990 --> 00:39:39.020
Of some feature I = 0 in this case.
00:39:40.280 --> 00:39:41.800
With respect to male, female.
00:39:41.800 --> 00:39:44.120
So how much does a particular variable
00:39:44.120 --> 00:39:46.390
tell me about whether whether a Penguin
00:39:46.390 --> 00:39:47.840
is male or female?
00:39:49.010 --> 00:39:51.430
And so here this was a little bit
00:39:51.430 --> 00:39:53.600
tricky code wise because there was also
00:39:53.600 --> 00:39:56.760
unknown so I have to like ignore the
00:39:56.760 --> 00:39:57.680
unknown case.
00:39:57.680 --> 00:40:00.947
So I take I create a variable Y that is
00:40:00.947 --> 00:40:03.430
one if the Penguin is male and -1 if
00:40:03.430 --> 00:40:04.090
it's female.
00:40:05.160 --> 00:40:09.567
And then I extracted out the values of
00:40:09.567 --> 00:40:09.905
XI.
00:40:09.905 --> 00:40:13.150
So I got XI where I = 0 in this case.
00:40:13.150 --> 00:40:16.822
And then I took all the Xis where Y was
00:40:16.822 --> 00:40:19.321
male, where it was male and where it
00:40:19.321 --> 00:40:19.945
was female.
00:40:19.945 --> 00:40:21.507
So this is the male.
00:40:21.507 --> 00:40:23.159
I happens to correspond to island of
00:40:23.160 --> 00:40:23.690
Bisco.
00:40:23.690 --> 00:40:26.260
So this is like the bit string of
00:40:26.260 --> 00:40:26.550
whether
00:40:26.550 --> 00:40:28.650
Penguins came from the island of Biscoe
00:40:28.650 --> 00:40:30.490
and were male, and this is whether they
00:40:30.490 --> 00:40:32.190
came from Biscoe and they were female.
00:40:34.110 --> 00:40:34.740
And.
00:40:35.810 --> 00:40:37.870
Then I'm counting how many times I see
00:40:37.870 --> 00:40:40.232
either male or female Penguins, and so
00:40:40.232 --> 00:40:42.127
I can use that to get the probability
00:40:42.127 --> 00:40:43.236
that a Penguin is male.
00:40:43.236 --> 00:40:44.700
And of course the probability that's
00:40:44.700 --> 00:40:46.092
female is 1 minus that.
00:40:46.092 --> 00:40:48.850
So I compute my entropy of Penguins
00:40:48.850 --> 00:40:50.400
being male or female.
00:40:50.400 --> 00:40:53.220
So minus probability y = 1 times log
00:40:53.220 --> 00:40:54.289
of that probability, minus
00:40:55.070 --> 00:40:58.123
1 minus probability of y = 1 times log
00:40:58.123 --> 00:40:58.760
of that probability.
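As a rough sketch of the entropy computation just described (my own Python/NumPy, not the instructor's actual notebook code):

```python
import numpy as np

def binary_entropy(p):
    """Entropy in bits of a binary variable with P(y = 1) = p:
    H = -p*log2(p) - (1-p)*log2(1-p), taken as 0 when p is 0 or 1."""
    if p == 0 or p == 1:
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

print(binary_entropy(0.5))  # a fair coin costs a full bit
```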
00:41:00.390 --> 00:41:03.180
And then I can get the probability that
00:41:03.180 --> 00:41:07.340
a male Penguin.
00:41:07.500 --> 00:41:08.260
00:41:09.460 --> 00:41:13.190
So this is the this is just the
00:41:13.190 --> 00:41:15.562
probability that a Penguin comes from
00:41:15.562 --> 00:41:15.875
Biscoe.
00:41:15.875 --> 00:41:17.940
So the probability is the sum of all
00:41:17.940 --> 00:41:20.015
the male and female Penguins that do
00:41:20.015 --> 00:41:21.603
not, sorry that do not come from Biscoe
00:41:21.603 --> 00:41:22.230
that are 0.
00:41:22.940 --> 00:41:24.070
Divide by the number.
00:41:25.070 --> 00:41:27.720
And then I can get the probability that
00:41:27.720 --> 00:41:28.820
a Penguin is.
00:41:29.950 --> 00:41:33.006
Is male given that it doesn't come from
00:41:33.006 --> 00:41:34.450
Biscoe, and the probability that
00:41:34.450 --> 00:41:36.410
Penguin is male given that it comes
00:41:36.410 --> 00:41:37.150
from Biscoe.
00:41:38.010 --> 00:41:40.550
And then finally I can compute my
00:41:40.550 --> 00:41:44.630
entropy of Y given X, which I can say
00:41:44.630 --> 00:41:46.240
there's different ways to express that.
00:41:46.240 --> 00:41:49.465
But here I express as the sum over the
00:41:49.465 --> 00:41:52.490
probability of whether the Penguin
00:41:52.490 --> 00:41:54.845
comes from Biscoe or not, times the
00:41:54.845 --> 00:41:57.730
probability that the Penguin is male or
00:41:57.730 --> 00:42:00.290
female given that it came from Biscoe
00:42:00.290 --> 00:42:03.016
or not, times the log of that
00:42:03.016 --> 00:42:03.274
probability.
00:42:03.274 --> 00:42:05.975
And so I end up with this big term
00:42:05.975 --> 00:42:08.300
here, and so that's the entropy.
00:42:08.350 --> 00:42:12.720
The island given or the entropy of the
00:42:12.720 --> 00:42:15.510
sex of the Penguin given whether it
00:42:15.510 --> 00:42:16.640
came from Biscoe or not.
00:42:17.240 --> 00:42:20.080
And if I compare those, I see that I
00:42:20.080 --> 00:42:21.297
gained very little information.
00:42:21.297 --> 00:42:23.560
So knowing what island a
00:42:23.560 --> 00:42:25.080
Penguin came from doesn't tell me much
00:42:25.080 --> 00:42:26.440
about whether it's male or female.
00:42:26.440 --> 00:42:28.420
That's not like a huge surprise,
00:42:28.420 --> 00:42:30.710
although it's not always exactly true.
00:42:30.710 --> 00:42:36.508
For example, something like 49% of people in
00:42:36.508 --> 00:42:40.450
the United States are male and I think
00:42:40.450 --> 00:42:42.880
51% of people in China are male.
00:42:42.880 --> 00:42:44.570
So sometimes there is a slight
00:42:44.570 --> 00:42:45.980
distribution difference depending on
00:42:45.980 --> 00:42:47.150
where you come from, and maybe that
00:42:47.230 --> 00:42:49.220
differs for some kinds of animals.
00:42:50.000 --> 00:42:51.740
But in any case, like quantitatively we
00:42:51.740 --> 00:42:54.010
can see knowing this island.
00:42:54.010 --> 00:42:55.690
Knowing that island tells me almost
00:42:55.690 --> 00:42:57.230
nothing about whether a Penguin is likely
00:42:57.230 --> 00:42:58.850
to be male or female, so the
00:42:58.850 --> 00:43:00.530
information gain is very small.
00:43:01.730 --> 00:43:03.230
Because it doesn't reduce the number of
00:43:03.230 --> 00:43:05.160
bits I need to represent whether each
00:43:05.160 --> 00:43:06.360
Penguin is male or female.
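Putting the binary-feature steps above together, a minimal sketch of the information-gain computation might look like this (function names are mine; this is not the lecture's actual code):

```python
import numpy as np

def binary_entropy(p):
    # entropy in bits of a 0/1 variable with P(y = 1) = p
    if p == 0 or p == 1:
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def info_gain_binary(x, y):
    """Information gain (bits) of a 0/1 feature x about a 0/1 label y:
    H(Y) minus the count-weighted entropy of y within each value of x."""
    x, y = np.asarray(x), np.asarray(y)
    h_y = binary_entropy(y.mean())
    h_y_given_x = 0.0
    for v in (0, 1):
        mask = x == v
        if mask.any():
            h_y_given_x += mask.mean() * binary_entropy(y[mask].mean())
    return h_y - h_y_given_x
```

A perfectly predictive feature recovers the full entropy of y, while an independent feature gains nothing.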
00:43:08.780 --> 00:43:10.590
We can also compute the information
00:43:10.590 --> 00:43:13.510
gain in a continuous case, so.
00:43:15.230 --> 00:43:18.550
So here I have again the same initial
00:43:18.550 --> 00:43:21.500
processing to get the male, female, Y
00:43:21.500 --> 00:43:22.020
value.
00:43:22.640 --> 00:43:24.980
And now I do a step through the
00:43:24.980 --> 00:43:28.110
different discrete ranges of the
00:43:28.110 --> 00:43:29.600
variable culmen length.
00:43:30.710 --> 00:43:32.590
And I compute the probability that a
00:43:32.590 --> 00:43:34.300
variable falls within this range.
00:43:36.100 --> 00:43:38.855
And I also compute the probability that
00:43:38.855 --> 00:43:41.300
a Penguin is male given that it falls
00:43:41.300 --> 00:43:42.060
within a range.
00:43:42.670 --> 00:43:46.480
So that is, out of how many times does
00:43:46.480 --> 00:43:49.240
the value fall within this range and
00:43:49.240 --> 00:43:52.330
the Penguin is male divide by the
00:43:52.330 --> 00:43:53.855
number of times that it falls within
00:43:53.855 --> 00:43:55.665
this range, which was the last element
00:43:55.665 --> 00:43:56.440
of PX.
00:43:57.160 --> 00:43:59.320
And then I add this like very tiny
00:43:59.320 --> 00:44:01.110
value to avoid divide by zero.
00:44:02.830 --> 00:44:05.300
And then so now I have the probability
00:44:05.300 --> 00:44:07.340
that's male given each possible like
00:44:07.340 --> 00:44:08.350
little range of X.
00:44:09.100 --> 00:44:12.590
And I can then compute the entropy as a
00:44:12.590 --> 00:44:15.606
sum over probability of X times log
00:44:15.606 --> 00:44:16.209
probability.
00:44:16.210 --> 00:44:19.240
Or the entropy of Y is computed as
00:44:19.240 --> 00:44:22.243
before and then the entropy of Y given
00:44:22.243 --> 00:44:24.681
X is the sum over probability of X
00:44:24.681 --> 00:44:26.815
times probability of Y given X times
00:44:26.815 --> 00:44:29.330
the log probability of Y given X.
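The binned estimate just described could be sketched like this (the epsilon and the half-open bins are my assumptions, mirroring the lecture's description rather than its exact code):

```python
import numpy as np

def info_gain_binned(x, y, step):
    """Estimated information gain of continuous x about a 0/1 label y,
    discretizing x into bins of width `step`; a tiny eps guards the
    logs against zero counts, as mentioned in the lecture."""
    x, y = np.asarray(x, dtype=float), np.asarray(y)
    eps = 1e-9
    H = lambda p: -(p * np.log2(p + eps) + (1 - p) * np.log2(1 - p + eps))
    h_y = H(y.mean())
    h_y_given_x = 0.0
    for lo in np.arange(x.min(), x.max() + step, step):
        mask = (x >= lo) & (x < lo + step)
        if mask.any():
            h_y_given_x += mask.mean() * H(y[mask].mean())
    return h_y - h_y_given_x

# toy data: small x values are all label 0, large x values all label 1
x = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
y = np.array([0, 0, 0, 1, 1, 1])
fine = info_gain_binned(x, y, step=1.0)     # separates the groups
coarse = info_gain_binned(x, y, step=10.0)  # one bin smears them together
```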
00:44:31.430 --> 00:44:32.880
And then I can look at the information
00:44:32.880 --> 00:44:33.770
gain.
00:44:33.770 --> 00:44:38.660
So here's the probability of X.
00:44:39.040 --> 00:44:39.780
00:44:40.400 --> 00:44:43.160
And here's the probability of y = 1
00:44:43.160 --> 00:44:43.680
given X.
00:44:44.570 --> 00:44:46.380
And the reason that these are different
00:44:46.380 --> 00:44:49.608
ranges is that probability of X
00:44:49.608 --> 00:44:52.260
is a continuous variable, so it should
00:44:52.260 --> 00:44:55.371
integrate to one, and probability of y
00:44:55.371 --> 00:44:58.600
= 1 given X will be somewhere between
00:44:58.600 --> 00:44:59.830
zero and one.
00:44:59.830 --> 00:45:01.442
But it's only modeling this discrete
00:45:01.442 --> 00:45:04.390
variable, so given a particular X, Y is
00:45:04.390 --> 00:45:06.429
equal to either zero or one, and so
00:45:06.430 --> 00:45:07.880
sometimes the probability could be as
00:45:07.880 --> 00:45:10.073
high as one and other times it could be
00:45:10.073 --> 00:45:10.381
0.
00:45:10.381 --> 00:45:12.660
It's just a discrete value conditioned on
00:45:12.660 --> 00:45:14.690
X where X is a continuous.
00:45:14.750 --> 00:45:17.280
Variable, but it's sometimes useful to
00:45:17.280 --> 00:45:20.150
plot these together, so lots and lots
00:45:20.150 --> 00:45:21.890
of times when I'm trying to solve some
00:45:21.890 --> 00:45:24.180
new problem, one of the first things
00:45:24.180 --> 00:45:26.510
I'll do is create plots like this for
00:45:26.510 --> 00:45:28.020
the different features to give me an
00:45:28.020 --> 00:45:30.650
understanding of like how linearly.
00:45:30.880 --> 00:45:32.725
How linear is the relationship between
00:45:32.725 --> 00:45:37.280
the features and the thing that
00:45:37.280 --> 00:45:38.160
I'm trying to predict?
00:45:39.070 --> 00:45:40.780
In this case, for example, there's a
00:45:40.780 --> 00:45:44.220
strong relationship, so if the culmen
00:45:44.220 --> 00:45:47.920
length is very high, then this Penguin
00:45:47.920 --> 00:45:49.310
is almost certainly male.
00:45:51.280 --> 00:45:52.330
If the.
00:45:53.150 --> 00:45:55.980
If the culmen length is moderately
00:45:55.980 --> 00:45:59.090
high, then it's pretty likely to be
00:45:59.090 --> 00:45:59.840
female.
00:46:00.900 --> 00:46:05.219
And if it's even lower, if it's even
00:46:05.220 --> 00:46:09.610
smaller, then it's kind of like roughly
00:46:09.610 --> 00:46:12.522
more evenly likely to be male and
00:46:12.522 --> 00:46:13.040
female.
00:46:13.040 --> 00:46:16.360
So again, this
00:46:16.360 --> 00:46:17.990
might not be super intuitive, like, why
00:46:17.990 --> 00:46:19.320
do we have this step here?
00:46:19.320 --> 00:46:22.454
But if you take my hypothesis that the
00:46:22.454 --> 00:46:25.580
adult male Penguins have large culmen
00:46:25.580 --> 00:46:27.481
lengths, and then adult female Penguins
00:46:27.481 --> 00:46:29.010
have the next largest.
00:46:30.070 --> 00:46:31.670
And then so there's like these
00:46:31.670 --> 00:46:32.980
different modes of the distribution,
00:46:32.980 --> 00:46:35.260
see these three humps, so this could be
00:46:35.260 --> 00:46:37.630
the adult male, adult female and the
00:46:37.630 --> 00:46:39.380
kids, which have a big range because
00:46:39.380 --> 00:46:41.290
they're different, different ages.
00:46:41.960 --> 00:46:44.300
And if you know it's a kid, then it
00:46:44.300 --> 00:46:44.640
doesn't.
00:46:44.640 --> 00:46:45.820
You don't really know if it's male or
00:46:45.820 --> 00:46:46.150
female.
00:46:46.150 --> 00:46:48.080
It could be a different, you know,
00:46:48.080 --> 00:46:51.980
bigger child or smaller child will kind
00:46:51.980 --> 00:46:54.290
of conflate with the gender.
00:46:56.150 --> 00:46:57.932
So if I looked at this then I might say
00:46:57.932 --> 00:46:58.861
I don't want to.
00:46:58.861 --> 00:47:00.610
I don't want to use this as part of a
00:47:00.610 --> 00:47:01.197
logistic regressor.
00:47:01.197 --> 00:47:03.650
I need a tree or I need to like cluster
00:47:03.650 --> 00:47:05.455
it or process this feature in some way
00:47:05.455 --> 00:47:06.990
to make this information more
00:47:06.990 --> 00:47:08.600
informative for my machine learning
00:47:08.600 --> 00:47:08.900
model.
00:47:10.510 --> 00:47:12.515
I'll take a break in just a minute, but
00:47:12.515 --> 00:47:13.930
I want to show you one more thing
00:47:13.930 --> 00:47:14.560
first.
00:47:14.560 --> 00:47:20.330
So again, like this is very subject to
00:47:20.330 --> 00:47:22.310
how I estimate these distributions.
00:47:22.310 --> 00:47:26.225
So if I choose a different step size,
00:47:26.225 --> 00:47:28.737
so here I choose a broader one, then I
00:47:28.737 --> 00:47:29.940
get a different probability
00:47:29.940 --> 00:47:32.060
distribution, I get a different P of X
00:47:32.060 --> 00:47:33.480
and I get a different conditional
00:47:33.480 --> 00:47:34.230
distribution.
00:47:34.910 --> 00:47:38.135
For this P of X,
00:47:38.135 --> 00:47:40.150
this step size is probably too big
00:47:40.150 --> 00:47:41.760
because it seemed like there were three
00:47:41.760 --> 00:47:44.530
modes which I can sort of interpret in
00:47:44.530 --> 00:47:45.050
some way.
00:47:45.050 --> 00:47:47.500
Making some guess where here I just had
00:47:47.500 --> 00:47:49.690
one mode; I like basically smoothed out
00:47:49.690 --> 00:47:52.270
the whole distribution and I get a very
00:47:52.270 --> 00:47:56.240
different kind of like very much
00:47:56.240 --> 00:47:59.385
smoother probability of y = 1 given X
00:47:59.385 --> 00:48:00.010
estimate.
00:48:00.010 --> 00:48:02.082
So just using my intuition I think this
00:48:02.082 --> 00:48:03.580
is probably a better estimate than
00:48:03.580 --> 00:48:05.500
this, but it's something that you could
00:48:05.500 --> 00:48:06.050
validate.
00:48:06.110 --> 00:48:07.440
With a validation set, for example,
00:48:07.440 --> 00:48:09.430
to see given these two estimates of the
00:48:09.430 --> 00:48:11.520
distribution, which one better reflects
00:48:11.520 --> 00:48:12.850
some held out set of data.
00:48:12.850 --> 00:48:14.850
That's one way that you can that you
00:48:14.850 --> 00:48:16.850
can try to get a more concrete answer
00:48:16.850 --> 00:48:18.210
to what's the better way.
00:48:19.350 --> 00:48:21.437
And then these different ways of
00:48:21.437 --> 00:48:22.910
estimating this distribution lead to
00:48:22.910 --> 00:48:24.190
very different estimates of the
00:48:24.190 --> 00:48:25.150
information gain.
00:48:25.150 --> 00:48:28.142
So estimating it with a smoother with
00:48:28.142 --> 00:48:30.543
this bigger step size gives me a
00:48:30.543 --> 00:48:32.920
smoother distribution that reduces my
00:48:32.920 --> 00:48:35.200
information gain quite significantly.
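The step-size effect can be reproduced on synthetic data. The three-mode data below is made up to mimic the culmen-length story (adult males, adult females, chicks), not the actual penguin dataset:

```python
import numpy as np

def binned_info_gain(x, y, step):
    # same binned estimate as described in the lecture, eps guards log(0)
    eps = 1e-9
    H = lambda p: -(p * np.log2(p + eps) + (1 - p) * np.log2(1 - p + eps))
    h_y = H(y.mean())
    h_cond = 0.0
    for lo in np.arange(x.min(), x.max() + step, step):
        m = (x >= lo) & (x < lo + step)
        if m.any():
            h_cond += m.mean() * H(y[m].mean())
    return h_y - h_cond

rng = np.random.default_rng(0)
n = 300
# three well-separated modes: 0 = adult male, 1 = adult female, 2 = chick
group = rng.integers(0, 3, n)
x = rng.normal(np.array([50.0, 45.0, 40.0])[group], 1.0)
# males are 1, females 0, chicks 50/50 (sex unknown from size)
y = np.where(group == 0, 1, np.where(group == 1, 0, rng.integers(0, 2, n)))

gain_fine = binned_info_gain(x, y, step=1.0)
gain_coarse = binned_info_gain(x, y, step=10.0)
print(gain_fine, gain_coarse)  # fine bins preserve more of the structure
```

Same data, different bin width, different estimated information gain.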
00:48:39.480 --> 00:48:42.110
So let's take let's take a 2 minute
00:48:42.110 --> 00:48:42.680
break.
00:48:42.680 --> 00:48:46.200
I've been talking a lot and you can
00:48:46.200 --> 00:48:47.910
think about this like how can the
00:48:47.910 --> 00:48:49.430
information gain be different?
00:48:50.300 --> 00:48:52.100
Depending on our step size and what
00:48:52.100 --> 00:48:55.240
does this kind of like imply about our
00:48:55.240 --> 00:48:56.420
machine learning algorithms.
00:48:57.900 --> 00:48:59.390
Right, so I'll set it.
00:48:59.390 --> 00:49:01.480
I'll set a timer.
00:49:01.480 --> 00:49:03.240
Feel free to get up and stretch and
00:49:03.240 --> 00:49:05.170
talk or clear your brain or whatever.
00:49:59.230 --> 00:50:01.920
So why does our
00:50:01.920 --> 00:50:04.655
information gain get improved from this
00:50:04.655 --> 00:50:06.060
slide to this slide?
00:50:06.060 --> 00:50:08.430
I'm kind of confused like these are
00:50:08.430 --> 00:50:09.275
different things.
00:50:09.275 --> 00:50:11.630
So here it's here, it's based on the
00:50:11.630 --> 00:50:12.246
culmen length.
00:50:12.246 --> 00:50:13.970
So I'm measuring the information gain
00:50:13.970 --> 00:50:15.645
of the culmen length.
00:50:15.645 --> 00:50:17.850
So how much does culmen length tell me
00:50:17.850 --> 00:50:19.900
about the male, female and then in the
00:50:19.900 --> 00:50:20.870
previous slide?
00:50:21.280 --> 00:50:22.515
Based on the island.
00:50:22.515 --> 00:50:25.723
So if I know in one case it's like if I
00:50:25.723 --> 00:50:27.300
know what island that came from, how
00:50:27.300 --> 00:50:29.505
much does that tell me about its
00:50:29.505 --> 00:50:31.080
whether it's male or female.
00:50:31.080 --> 00:50:32.850
And in this case, if I know the culmen
00:50:32.850 --> 00:50:34.456
length, how much does that tell me
00:50:34.456 --> 00:50:35.832
about whether it's male or female?
00:50:35.832 --> 00:50:36.750
I see I see.
00:50:36.750 --> 00:50:39.093
So we changed to another feature.
00:50:39.093 --> 00:50:39.794
Yeah.
00:50:39.794 --> 00:50:42.770
So that I should have said that more
00:50:42.770 --> 00:50:43.090
clearly.
00:50:43.090 --> 00:50:45.690
But the I here is the feature index.
00:50:45.940 --> 00:50:46.353
I see.
00:50:46.353 --> 00:50:47.179
I see, I see.
00:50:47.180 --> 00:50:48.360
That makes sense, yeah.
00:50:50.170 --> 00:50:53.689
OK, it says we need like a check,
00:50:53.690 --> 00:50:53.980
right?
00:50:53.980 --> 00:50:54.805
Yeah.
00:50:54.805 --> 00:50:57.483
So I'm able to make the decision tree
00:50:57.483 --> 00:51:00.390
and I get, I get this like I get the
00:51:00.390 --> 00:51:02.226
first check is just less or equal to
00:51:02.226 --> 00:51:05.560
26, but the second check it differs
00:51:05.560 --> 00:51:07.897
from one side, it'll be like less than
00:51:07.897 --> 00:51:10.530
equal to 14.95 of depth and then one
00:51:10.530 --> 00:51:11.476
side it will be.
00:51:11.476 --> 00:51:13.603
So you want to look down on the tree
00:51:13.603 --> 00:51:14.793
here like here.
00:51:14.793 --> 00:51:17.050
You have basically a perfect
00:51:17.050 --> 00:51:18.760
classification here.
00:51:18.820 --> 00:51:21.050
Right here you have perfect
00:51:21.050 --> 00:51:22.860
classifications into
00:51:22.860 --> 00:51:23.260
Gentoo.
00:51:24.000 --> 00:51:28.360
And so these are two decisions that you
00:51:28.360 --> 00:51:28.970
could use.
00:51:28.970 --> 00:51:30.460
For example, right?
00:51:30.460 --> 00:51:32.710
Each of these paths give you a decision
00:51:32.710 --> 00:51:33.590
about whether it's a
00:51:33.590 --> 00:51:34.330
Gentoo or not.
00:51:36.470 --> 00:51:39.660
So a decision, or
00:51:39.660 --> 00:51:41.650
rule, is like one path through the tree.
00:51:42.660 --> 00:51:46.200
So in so in the case of the work would
00:51:46.200 --> 00:51:48.026
you just because we need like a two
00:51:48.026 --> 00:51:48.870
check thing right?
00:51:48.870 --> 00:51:50.793
So our two-check thing would be this
00:51:50.793 --> 00:51:53.770
and this for example if this is greater
00:51:53.770 --> 00:51:57.010
than that and if this is less than that
00:51:57.010 --> 00:51:58.420
then it's that.
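As a tiny illustration, one two-check path through the tree reads as an AND of threshold tests. The thresholds below echo the numbers mentioned in the discussion (26 and 14.95) but are illustrative, not the actual fitted values:

```python
def looks_like_gentoo(culmen_length_mm, culmen_depth_mm):
    """One hypothetical two-check rule read off a decision-tree path:
    first check fails (length > 26), second check passes
    (depth <= 14.95), so the path classifies the penguin as Gentoo."""
    return culmen_length_mm > 26 and culmen_depth_mm <= 14.95
```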
00:52:03.860 --> 00:52:06.510
Yeah, I need to start, OK.
00:52:08.320 --> 00:52:10.760
Alright, so actually so one thing I
00:52:10.760 --> 00:52:12.585
want to clarify based on a question is
00:52:12.585 --> 00:52:14.460
that the things that I'm showing here
00:52:14.460 --> 00:52:15.631
are for different features.
00:52:15.631 --> 00:52:17.595
So I is the feature index.
00:52:17.595 --> 00:52:19.390
So the reason that these have different
00:52:19.390 --> 00:52:21.303
entropies: this was for island, where
00:52:21.303 --> 00:52:22.897
here I'm talking about culmen length.
00:52:22.897 --> 00:52:24.970
So different features will give us
00:52:24.970 --> 00:52:26.655
different, different information gains
00:52:26.655 --> 00:52:28.300
about whether the Penguin is male or
00:52:28.300 --> 00:52:30.753
female and the particular feature index
00:52:30.753 --> 00:52:32.270
is just like here.
00:52:34.260 --> 00:52:35.550
All right, so.
00:52:36.750 --> 00:52:39.380
So, does someone have an answer?
00:52:39.380 --> 00:52:41.655
So why is it that the information gain
00:52:41.655 --> 00:52:43.040
is different depending on the step
00:52:43.040 --> 00:52:43.285
size?
00:52:43.285 --> 00:52:45.050
That seems a little bit unintuitive,
00:52:45.050 --> 00:52:45.350
right?
00:52:45.350 --> 00:52:45.870
Because.
00:52:46.520 --> 00:52:47.560
The same data.
00:52:47.560 --> 00:52:48.730
Why does it?
00:52:48.730 --> 00:52:50.830
Why does information gain depend on
00:52:50.830 --> 00:52:51.290
this?
00:52:51.290 --> 00:52:52.110
Yeah?
00:52:52.600 --> 00:52:56.200
If we have for a bigger step that we
00:52:56.200 --> 00:52:58.130
might overshoot and like, we might not
00:52:58.130 --> 00:52:59.330
capture those like.
00:53:01.000 --> 00:53:03.400
Local like optimized or like local?
00:53:09.790 --> 00:53:10.203
Right.
00:53:10.203 --> 00:53:13.070
So the answer was like, if we have a
00:53:13.070 --> 00:53:15.930
bigger step size, then we might like be
00:53:15.930 --> 00:53:17.525
grouping too many things together so
00:53:17.525 --> 00:53:19.905
that it no longer like contains the
00:53:19.905 --> 00:53:21.610
information that is needed to
00:53:21.610 --> 00:53:24.580
distinguish whether a Penguin is male
00:53:24.580 --> 00:53:25.417
or female, right?
00:53:25.417 --> 00:53:27.515
Or it contains less of that
00:53:27.515 --> 00:53:28.590
information, right.
00:53:28.590 --> 00:53:30.560
And so, like, the key concept that's
00:53:30.560 --> 00:53:33.020
really important to know is that.
00:53:33.590 --> 00:53:34.240
00:53:35.310 --> 00:53:37.820
Is that the information gain?
00:53:37.820 --> 00:53:41.008
It depends on how we use the data.
00:53:41.008 --> 00:53:43.130
It depends on how we model the data.
00:53:43.130 --> 00:53:45.110
So that the information gain is not
00:53:45.110 --> 00:53:47.400
really inherent in the data itself or
00:53:47.400 --> 00:53:48.400
even in.
00:53:48.400 --> 00:53:50.580
It doesn't even depend on the.
00:53:51.600 --> 00:53:54.290
The true distribution between the data
00:53:54.290 --> 00:53:56.600
and the thing that we're trying to
00:53:56.600 --> 00:53:57.190
predict.
00:53:57.190 --> 00:53:58.880
So there may be a theoretical
00:53:58.880 --> 00:54:00.730
information gain, which is if you knew
00:54:00.730 --> 00:54:03.360
the true distribution of X and Y then
00:54:03.360 --> 00:54:04.570
what would be the probability of Y
00:54:04.570 --> 00:54:05.640
given X?
00:54:05.640 --> 00:54:08.370
But in practice, we never know the true
00:54:08.370 --> 00:54:08.750
distribution.
00:54:09.630 --> 00:54:12.540
The actual information gain only
00:54:12.540 --> 00:54:14.695
depends on how we model the data, how
00:54:14.695 --> 00:54:16.430
we're able to squeeze the information
00:54:16.430 --> 00:54:17.770
out and make a prediction.
00:54:17.770 --> 00:54:22.450
For example, if I were like in China or
00:54:22.450 --> 00:54:24.069
something and I stopped somebody and I
00:54:24.070 --> 00:54:27.270
say, how do I get like over how do I
00:54:27.270 --> 00:54:29.110
get to this place and they start
00:54:29.110 --> 00:54:31.490
talking to me in Chinese and I have no
00:54:31.490 --> 00:54:32.430
idea what they're saying.
00:54:33.080 --> 00:54:35.190
They have like all the information is
00:54:35.190 --> 00:54:36.390
in that data.
00:54:36.390 --> 00:54:38.390
Somebody else could use that
00:54:38.390 --> 00:54:40.070
information to get where they want to
00:54:40.070 --> 00:54:42.050
go, but I can't use it because I don't
00:54:42.050 --> 00:54:43.560
have the right model for that data.
00:54:43.560 --> 00:54:45.858
So the information gain to me is 0,
00:54:45.858 --> 00:54:47.502
but the information gain to somebody
00:54:47.502 --> 00:54:49.260
else could be very high because of
00:54:49.260 --> 00:54:49.926
their model.
00:54:49.926 --> 00:54:52.257
And in the same way like we can take
00:54:52.257 --> 00:54:54.870
the same data and that data may have no
00:54:54.870 --> 00:54:57.369
information gain if we don't model it
00:54:57.370 --> 00:54:58.970
correctly, if we're not sure how to
00:54:58.970 --> 00:55:01.520
model it or use the data to extract our
00:55:01.520 --> 00:55:02.490
predictions.
00:55:02.490 --> 00:55:03.070
But.
00:55:03.130 --> 00:55:05.670
As we get better models, we're able to
00:55:05.670 --> 00:55:07.830
improve the information gain that we
00:55:07.830 --> 00:55:09.130
can get from that same data.
00:55:09.130 --> 00:55:10.920
And so that's basically like the goal
00:55:10.920 --> 00:55:13.480
of machine learning is to be able to
00:55:13.480 --> 00:55:14.740
model the data and model the
00:55:14.740 --> 00:55:16.850
relationships in a way that maximizes
00:55:16.850 --> 00:55:19.350
your information gain for predicting
00:55:19.350 --> 00:55:20.470
the thing that you're trying to
00:55:20.470 --> 00:55:20.810
predict.
00:55:23.300 --> 00:55:26.760
So again, we only have an empirical
00:55:26.760 --> 00:55:28.630
estimate based on the observed samples.
00:55:30.680 --> 00:55:31.580
And so.
00:55:32.510 --> 00:55:34.000
So we don't know the true information
00:55:34.000 --> 00:55:36.070
gain, just some estimated information
00:55:36.070 --> 00:55:37.850
gain based on estimated probability
00:55:37.850 --> 00:55:38.560
distributions.
00:55:39.330 --> 00:55:40.930
If we had more data, we could probably
00:55:40.930 --> 00:55:42.200
get a better estimate.
00:55:43.950 --> 00:55:46.870
And when we're trying to estimate
00:55:46.870 --> 00:55:49.090
things based on continuous variables,
00:55:49.090 --> 00:55:50.270
then we have different choices of
00:55:50.270 --> 00:55:50.850
models.
00:55:50.850 --> 00:55:53.753
And so there's a tradeoff between like
00:55:53.753 --> 00:55:55.380
over smoothing or simplifying the
00:55:55.380 --> 00:55:57.770
distribution and making overly
00:55:57.770 --> 00:55:59.740
confident predictions based on small
00:55:59.740 --> 00:56:00.720
data samples.
00:56:00.720 --> 00:56:03.790
So over here, I may have like very good
00:56:03.790 --> 00:56:06.060
estimates for the probability that X
00:56:06.060 --> 00:56:07.840
falls within this broader range.
00:56:09.610 --> 00:56:11.770
But maybe I have, like, smoothed out
00:56:11.770 --> 00:56:13.960
the important information for
00:56:13.960 --> 00:56:15.630
determining whether the Penguin is male
00:56:15.630 --> 00:56:16.230
or female.
00:56:16.980 --> 00:56:19.090
Maybe over here I have much more
00:56:19.090 --> 00:56:20.590
uncertain estimates of each of these
00:56:20.590 --> 00:56:21.360
probabilities.
00:56:21.360 --> 00:56:23.090
Like, is the probability distribution
00:56:23.090 --> 00:56:23.920
really that spiky?
00:56:23.920 --> 00:56:24.990
It's probably not.
00:56:24.990 --> 00:56:26.000
It's probably.
00:56:26.000 --> 00:56:27.710
This is probably a mixture of a few
00:56:27.710 --> 00:56:28.460
Gaussians.
00:56:29.590 --> 00:56:31.490
Which would be a smoother bumpy
00:56:31.490 --> 00:56:34.430
distribution, but on the other hand
00:56:34.430 --> 00:56:35.770
I've like preserved more of the
00:56:35.770 --> 00:56:37.650
information that is needed I would
00:56:37.650 --> 00:56:40.860
think to classify the Penguin as male
00:56:40.860 --> 00:56:41.390
or female.
00:56:42.260 --> 00:56:43.420
So there's this tradeoff.
00:56:44.030 --> 00:56:46.010
And this is just another simple example
00:56:46.010 --> 00:56:47.540
of the bias variance tradeoff.
00:56:47.540 --> 00:56:51.740
So here I have a low variance
00:56:51.740 --> 00:56:53.160
but high bias estimate
00:56:53.160 --> 00:56:53.960
of my distribution.
00:56:53.960 --> 00:56:55.100
It's overly smooth.
00:56:56.000 --> 00:57:00.290
And over there I have a higher
00:57:00.290 --> 00:57:02.575
variance, lower bias estimate of the
00:57:02.575 --> 00:57:02.896
distribution.
00:57:02.896 --> 00:57:04.980
And if I made the step size really
00:57:04.980 --> 00:57:07.175
small so I had that super spiky
00:57:07.175 --> 00:57:08.973
distribution, then that would be a
00:57:08.973 --> 00:57:10.852
really low bias but very high variance
00:57:10.852 --> 00:57:11.135
estimate.
00:57:11.135 --> 00:57:13.250
If I resampled it, I might get spikes
00:57:13.250 --> 00:57:14.987
in totally different places, so a
00:57:14.987 --> 00:57:16.331
totally different estimate of the
00:57:16.331 --> 00:57:16.600
distribution.
00:57:20.910 --> 00:57:22.506
And it's also important to note that
00:57:22.506 --> 00:57:24.040
that when you're dealing with
00:57:24.040 --> 00:57:25.570
something like the bias variance
00:57:25.570 --> 00:57:28.382
tradeoff, in this case the complexity
00:57:28.382 --> 00:57:29.985
parameter is a step size.
00:57:29.985 --> 00:57:32.200
The optimal parameter depends on how
00:57:32.200 --> 00:57:34.210
much data we have, because the more
00:57:34.210 --> 00:57:36.180
data we have, the lower the variance of
00:57:36.180 --> 00:57:39.870
our estimate, and so the ideal
00:57:39.870 --> 00:57:42.610
complexity changes.
00:57:42.610 --> 00:57:45.256
So if I had lots of data, lots and lots
00:57:45.256 --> 00:57:47.090
and lots of data, then maybe I would
00:57:47.090 --> 00:57:47.360
choose a
00:57:47.430 --> 00:57:49.745
step size even smaller than one because
00:57:49.745 --> 00:57:51.660
I could estimate those probabilities
00:57:51.660 --> 00:57:53.143
pretty well given all that data.
00:57:53.143 --> 00:57:54.890
I could estimate those little tiny
00:57:54.890 --> 00:57:57.346
ranges, where if I had way less data
00:57:57.346 --> 00:57:59.060
than maybe this would become the better
00:57:59.060 --> 00:58:02.470
choice, because otherwise my estimate
00:58:02.470 --> 00:58:03.910
with step size of 1 would just be too
00:58:03.910 --> 00:58:04.580
noisy.
00:58:08.850 --> 00:58:11.050
So the true probability distribution,
00:58:11.050 --> 00:58:13.990
entropy, and information gain
00:58:13.990 --> 00:58:14.880
cannot be known.
00:58:14.880 --> 00:58:16.820
We can only try to make our best
00:58:16.820 --> 00:58:17.420
estimate.
00:58:19.140 --> 00:58:21.550
Alright, so that was all just focusing
00:58:21.550 --> 00:58:22.620
on X.
00:58:22.620 --> 00:58:23.320
Pretty much.
00:58:23.320 --> 00:58:25.716
A little bit of X and Y, but mostly X.
00:58:25.716 --> 00:58:28.130
So let's come back to how this fits
00:58:28.130 --> 00:58:29.760
into the whole machine learning
00:58:29.760 --> 00:58:30.310
framework.
00:58:31.120 --> 00:58:34.240
So we can say that one way that we can
00:58:34.240 --> 00:58:35.100
look at this function.
00:58:35.100 --> 00:58:36.515
Here we're trying to find parameters
00:58:36.515 --> 00:58:39.720
that minimize the loss of our models
00:58:39.720 --> 00:58:41.020
predictions compared to the ground
00:58:41.020 --> 00:58:41.910
truth prediction.
00:58:42.840 --> 00:58:45.126
One way that we can view this is that
00:58:45.126 --> 00:58:48.380
we're trying to maximize the
00:58:48.380 --> 00:58:52.550
information gain of Y given X, maybe
00:58:52.550 --> 00:58:54.000
with some additional constraints and
00:58:54.000 --> 00:58:55.850
priors that will improve the robustness
00:58:55.850 --> 00:58:57.940
to limited data that essentially like
00:58:57.940 --> 00:59:00.290
find that trade off for us in the bias
00:59:00.290 --> 00:59:00.720
variance.
00:59:01.920 --> 00:59:02.590
trade off.
00:59:03.910 --> 00:59:06.960
So I could rewrite this if I'm if my
00:59:06.960 --> 00:59:09.430
loss function is the log probability of
00:59:09.430 --> 00:59:10.090
Y given X.
00:59:11.840 --> 00:59:13.710
Or let's just say for now that I
00:59:13.710 --> 00:59:16.807
rewrite this in terms of the in terms
00:59:16.807 --> 00:59:18.940
of the conditional entropy, or in terms
00:59:18.940 --> 00:59:20.120
of the information gain.
00:59:20.920 --> 00:59:22.610
So let's say I want to find the
00:59:22.610 --> 00:59:23.990
parameters Theta.
00:59:23.990 --> 00:59:25.830
That means that.
00:59:26.770 --> 00:59:29.390
Minimize my negative information gain,
00:59:29.390 --> 00:59:31.730
otherwise maximize my information gain,
00:59:31.730 --> 00:59:32.010
right?
00:59:32.750 --> 00:59:36.690
So that is, I want to maximize the
00:59:36.690 --> 00:59:38.670
difference between the entropy of Y
00:59:39.690 --> 00:59:43.149
and the entropy of Y
00:59:43.150 --> 00:59:45.240
given X or equivalently, minimize the
00:59:45.240 --> 00:59:45.920
negative of that.
00:59:46.760 --> 00:59:49.530
Plus some kind of regularization or
00:59:49.530 --> 00:59:52.300
penalty on having unlikely parameters.
00:59:52.300 --> 00:59:54.814
So this would typically be like our
00:59:54.814 --> 00:59:56.380
squared penalty regularization.
00:59:56.380 --> 00:59:57.480
I mean our squared weight
00:59:57.480 --> 00:59:58.290
regularization.
01:00:00.680 --> 01:00:06.810
And if I write down what this entropy
01:00:06.810 --> 01:00:08.810
of Y given X is, then it's just the
01:00:08.810 --> 01:00:12.019
integral over all my data over all
01:00:12.020 --> 01:00:15.839
possible values X of probability of X
01:00:15.840 --> 01:00:18.120
times log probability of Y given X.
01:00:19.490 --> 01:00:22.016
I don't have a continuous distribution
01:00:22.016 --> 01:00:23.685
of X. I don't have infinite samples.
01:00:23.685 --> 01:00:25.810
I just have an empirical sample.
01:00:25.810 --> 01:00:27.750
I have a few observations, some limited
01:00:27.750 --> 01:00:30.220
number of observations, and so my
01:00:30.220 --> 01:00:33.040
estimate of this of this integral
01:00:33.040 --> 01:00:35.620
becomes a sum over all the samples I do
01:00:35.620 --> 01:00:36.050
have.
01:00:36.770 --> 01:00:38.700
Assuming that each of these are all
01:00:38.700 --> 01:00:40.850
equally likely, then they'll just be
01:00:40.850 --> 01:00:43.406
some constant for the probability of X.
01:00:43.406 --> 01:00:46.020
So I can kind of like ignore that in
01:00:46.020 --> 01:00:47.470
relative terms, right?
01:00:47.470 --> 01:00:49.830
So I have a sum over the probability of X,
01:00:49.830 --> 01:00:51.170
which would just be like one over
01:00:51.170 --> 01:00:51.420
N.
01:00:52.260 --> 01:00:55.765
Times the negative log probability of
01:00:55.765 --> 01:00:58.510
the label or of the thing that I'm
01:00:58.510 --> 01:01:00.919
trying to predict for the nth sample
01:01:00.920 --> 01:01:02.900
given the features of the nth sample.
01:01:03.910 --> 01:01:06.814
And this is exactly the cross entropy.
01:01:06.814 --> 01:01:07.998
This is,
01:01:07.998 --> 01:01:10.180
if Y is a discrete
01:01:10.180 --> 01:01:11.663
variable, for example, our
01:01:11.663 --> 01:01:14.718
cross entropy loss, and
01:01:14.718 --> 01:01:15.331
even if it's not:
01:01:15.331 --> 01:01:18.870
This is the negative log likelihood of
01:01:18.870 --> 01:01:21.430
my labels given the data, and so this
01:01:21.430 --> 01:01:23.420
gives us the loss term that we use
01:01:23.420 --> 01:01:24.610
typically for deep network
01:01:24.610 --> 01:01:26.200
classification or for logistic
01:01:26.200 --> 01:01:26.850
regression.
01:01:27.470 --> 01:01:29.230
And so it's exactly the same as
01:01:29.230 --> 01:01:33.330
maximizing the information gain of the
01:01:33.330 --> 01:01:34.890
variables that we're trying to predict
01:01:34.890 --> 01:01:36.070
given the features that we have
01:01:36.070 --> 01:01:36.560
available.
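This equivalence can be sketched numerically. The following is an illustrative toy example, not the lecture's data: the eight-sample arrays are made up, and the empirical estimate of H(Y|X) comes out exactly equal to the average negative log probability of each label given its features.

```python
import numpy as np

# Toy made-up dataset: one binary feature x, binary label y.
x = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y = np.array([0, 0, 0, 1, 1, 1, 1, 0])

def entropy(p):
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

h_y = entropy(np.bincount(y) / len(y))   # H(Y)

# H(Y|X) as (1/N) * sum_n -log2 P(y_n | x_n), with P estimated from
# the sample itself (each sample gets the constant weight 1/N).
nll = 0.0
for xn, yn in zip(x, y):
    p = np.mean(y[x == xn] == yn)        # empirical P(y_n | x_n)
    nll += -np.log2(p)
h_y_given_x = nll / len(y)

info_gain = h_y - h_y_given_x            # the quantity to maximize
```

Minimizing the average negative log likelihood and maximizing this empirical information gain are the same objective up to the constant H(Y).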
01:01:39.760 --> 01:01:44.390
So I've been like manually computing
01:01:44.390 --> 01:01:46.207
information gain and probabilities and
01:01:46.207 --> 01:01:48.399
stuff like that using code, but like
01:01:48.400 --> 01:01:50.920
kind of like hand coding lots of stuff.
01:01:50.920 --> 01:01:53.370
But that has its limitations.
01:01:53.370 --> 01:01:56.670
Like I can analyze one continuous
01:01:56.670 --> 01:01:59.310
variable or maybe 2 features at once
01:01:59.310 --> 01:02:00.970
and I can come up with some function
01:02:00.970 --> 01:02:03.060
and look at it and use my intuition and
01:02:03.060 --> 01:02:04.570
try to like create a good model based
01:02:04.570 --> 01:02:05.176
on that.
01:02:05.176 --> 01:02:06.910
But if you have thousands of variables,
01:02:06.910 --> 01:02:08.535
it's just like completely impractical
01:02:08.535 --> 01:02:09.430
to do this.
01:02:09.490 --> 01:02:12.246
Right, it would take forever to try to
01:02:12.246 --> 01:02:14.076
like plot all the different features
01:02:14.076 --> 01:02:16.530
and plot combinations and try to like
01:02:16.530 --> 01:02:19.420
manually explore a big data set like this.
01:02:19.790 --> 01:02:21.590
And so.
01:02:22.780 --> 01:02:24.370
So we need more like automatic
01:02:24.370 --> 01:02:26.110
approaches to figure out how we can
01:02:26.110 --> 01:02:29.890
maximize the information gain of Y
01:02:29.890 --> 01:02:31.140
given X.
01:02:31.140 --> 01:02:32.810
And so that's basically why we have
01:02:32.810 --> 01:02:33.843
machine learning.
01:02:33.843 --> 01:02:36.560
So in machine learning, we're trying to
01:02:36.560 --> 01:02:39.560
build encoders sometimes to try to
01:02:39.560 --> 01:02:41.740
automatically transform X into some
01:02:41.740 --> 01:02:44.160
representation that makes it easier to
01:02:44.160 --> 01:02:45.850
extract information about Y.
01:02:47.110 --> 01:02:49.220
Sometimes, sometimes people do this
01:02:49.220 --> 01:02:49.517
part.
01:02:49.517 --> 01:02:51.326
Sometimes we like hand code the
01:02:51.326 --> 01:02:51.910
features right.
01:02:51.910 --> 01:02:54.270
We create histogram of gradient
01:02:54.270 --> 01:03:00.120
features for images, or we like I could
01:03:00.120 --> 01:03:01.780
take that culmen length and split it
01:03:01.780 --> 01:03:03.409
into three different ranges that I
01:03:03.410 --> 01:03:06.185
think represent like the adult male and
01:03:06.185 --> 01:03:08.520
adult female and children for example.
01:03:09.480 --> 01:03:11.770
But sometimes some methods do this
01:03:11.770 --> 01:03:13.925
automatically, and then second we have
01:03:13.925 --> 01:03:15.940
some decoder, something that predicts Y
01:03:15.940 --> 01:03:18.050
from X that automatically extracts the
01:03:18.050 --> 01:03:18.730
information.
01:03:19.530 --> 01:03:22.260
About Y from X. So our logistic
01:03:22.260 --> 01:03:23.560
regressor for example.
01:03:26.940 --> 01:03:29.460
The most powerful machine learning
01:03:29.460 --> 01:03:32.870
algorithms smoothly combine the feature
01:03:32.870 --> 01:03:34.940
extraction with the decoding, the
01:03:34.940 --> 01:03:37.530
prediction and offer controls or
01:03:37.530 --> 01:03:39.370
protections against overfitting.
01:03:40.860 --> 01:03:43.910
So they both try to make as good
01:03:43.910 --> 01:03:45.190
predictions as possible on the
01:03:45.190 --> 01:03:47.290
training data, and they try to do it in
01:03:47.290 --> 01:03:49.920
a way that is not like overfitting or
01:03:49.920 --> 01:03:51.103
leading to like high variance
01:03:51.103 --> 01:03:52.300
predictions that aren't going to
01:03:52.300 --> 01:03:52.970
generalize well.
01:03:53.800 --> 01:03:55.750
Random forests, for example.
01:03:55.750 --> 01:03:58.070
We have these deep trees that partition
01:03:58.070 --> 01:04:00.830
the feature space, chunk it up, and
01:04:00.830 --> 01:04:03.445
they are built by optimizing the
01:04:03.445 --> 01:04:04.180
information gain
01:04:04.180 --> 01:04:04.770
at each step.
01:04:04.770 --> 01:04:06.140
Those trees are trained to try to
01:04:06.140 --> 01:04:07.830
maximize the information gain for the
01:04:07.830 --> 01:04:08.970
variable that you're predicting.
01:04:09.940 --> 01:04:13.300
And until you get some full tree, and
01:04:13.300 --> 01:04:15.910
so individually each of these trees has
01:04:15.910 --> 01:04:16.710
low bias.
01:04:16.710 --> 01:04:18.250
It makes very accurate predictions on
01:04:18.250 --> 01:04:20.480
the training data, but high variance.
01:04:20.480 --> 01:04:22.560
You might get different trees if you
01:04:22.560 --> 01:04:24.479
were to resample the training data.
01:04:25.350 --> 01:04:28.790
And then in a random forest you train a
01:04:28.790 --> 01:04:30.120
whole bunch of these trees with
01:04:30.120 --> 01:04:31.570
different subsets of features.
01:04:32.640 --> 01:04:34.010
And then you average over their
01:04:34.010 --> 01:04:36.550
predictions and that averaging reduces
01:04:36.550 --> 01:04:38.820
the variance and so at the end of the
01:04:38.820 --> 01:04:40.719
day you have like a low variance, low
01:04:40.720 --> 01:04:42.510
bias predictor.
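The variance reduction from averaging can be sketched with simulated predictors. This is an idealized setup with made-up numbers: each "tree" is an unbiased but noisy predictor, and the trees are assumed independent, which feature and data subsampling only approximate in a real random forest.

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 1.0                  # the made-up quantity being predicted
n_trees, n_trials = 50, 1000

# Each row is one resampling of the training data; each column one
# "tree": unbiased (centered on the truth) but noisy (high variance).
tree_preds = true_value + rng.normal(0.0, 1.0, size=(n_trials, n_trees))

single_tree = tree_preds[:, 0]    # one deep tree: low bias, high variance
forest = tree_preds.mean(axis=1)  # averaging 50 trees per trial

var_single = float(single_tree.var())
var_forest = float(forest.var())
```

Averaging 50 independent unbiased predictors divides the variance by roughly 50 while leaving the bias unchanged, which is the low-bias, low-variance outcome described above.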
01:04:44.560 --> 01:04:46.350
The boosted trees similarly.
01:04:47.560 --> 01:04:50.020
You have shallow trees this time that
01:04:50.020 --> 01:04:51.860
kind of have low variance individually,
01:04:51.860 --> 01:04:53.170
at least if you have a relatively
01:04:53.170 --> 01:04:54.640
uniform data distribution.
01:04:56.760 --> 01:04:59.000
They again partition the feature space
01:04:59.000 --> 01:05:01.250
by optimizing the information gain, now
01:05:01.250 --> 01:05:02.805
using all the features but on a
01:05:02.805 --> 01:05:04.120
weighted data sample.
01:05:04.120 --> 01:05:05.980
And then each tree is trained on some
01:05:05.980 --> 01:05:07.933
weighted sample that focuses more on
01:05:07.933 --> 01:05:09.560
the examples that previous trees
01:05:09.560 --> 01:05:12.245
misclassified in order to reduce the
01:05:12.245 --> 01:05:12.490
bias.
01:05:12.490 --> 01:05:14.240
So that a sequence of these little
01:05:14.240 --> 01:05:16.640
trees actually has like much lower bias
01:05:16.640 --> 01:05:18.690
than the first tree because they're
01:05:18.690 --> 01:05:20.200
incrementally trying to improve their
01:05:20.200 --> 01:05:21.120
prediction function.
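The reweighting step can be sketched in the style of classic AdaBoost; the labels and weak-tree predictions below are hypothetical, chosen just to show the weights shifting onto the mistakes.

```python
import numpy as np

# Hypothetical labels and the predictions of one shallow weak tree.
y = np.array([1, 1, -1, -1, 1, -1])      # true labels in {-1, +1}
pred = np.array([1, 1, -1, -1, -1, 1])   # weak tree; last two are wrong

w = np.ones(len(y)) / len(y)             # start with uniform weights

err = float(np.sum(w[pred != y]))        # weighted error of the tree
alpha = 0.5 * np.log((1 - err) / err)    # the tree's vote weight
w = w * np.exp(-alpha * y * pred)        # misclassified points grow
w = w / w.sum()                          # renormalize to sum to 1
```

After the update the misclassified examples hold half of the total weight, so the next tree concentrates on exactly the cases the previous one got wrong.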
01:05:22.780 --> 01:05:24.690
Now, the downside of the boosted
01:05:24.690 --> 01:05:27.950
decision trees, or the danger of them
01:05:27.950 --> 01:05:30.474
is that they will tend to focus more
01:05:30.474 --> 01:05:32.510
and more on smaller and smaller amounts
01:05:32.510 --> 01:05:33.840
of data that are just really hard to
01:05:33.840 --> 01:05:34.810
classify.
01:05:34.810 --> 01:05:36.410
Maybe some of that data was mislabeled
01:05:36.410 --> 01:05:38.040
and so that's why it's so hard to
01:05:38.040 --> 01:05:38.850
classify.
01:05:38.850 --> 01:05:40.626
And maybe it's just very unusual.
01:05:40.626 --> 01:05:43.250
And so as you train lots of these
01:05:43.250 --> 01:05:45.666
boosted trees, eventually they start to
01:05:45.666 --> 01:05:48.326
focus on like a tiny subset of data and
01:05:48.326 --> 01:05:50.080
that can cause high variance
01:05:50.080 --> 01:05:50.640
overfitting.
01:05:51.900 --> 01:05:54.075
And so random forests are very robust
01:05:54.075 --> 01:05:55.476
to overfitting; with boosted trees
01:05:55.476 --> 01:05:57.700
you still have to be careful
01:05:57.700 --> 01:06:00.060
about how big those trees are and how
01:06:00.060 --> 01:06:00.990
many of them you train.
01:06:02.650 --> 01:06:03.930
And then deep networks.
01:06:03.930 --> 01:06:05.709
So we have deep networks.
01:06:05.710 --> 01:06:08.066
The mantra of deep networks is end to
01:06:08.066 --> 01:06:11.342
end learning, which means that you just
01:06:11.342 --> 01:06:13.865
give it your simplest features.
01:06:13.865 --> 01:06:17.080
You try not to like, preprocess it too
01:06:17.080 --> 01:06:18.610
much, because then you're just like
01:06:18.610 --> 01:06:20.230
removing some information.
01:06:20.230 --> 01:06:21.856
So you don't compute HOG features, you
01:06:21.856 --> 01:06:23.060
just give it pixels.
01:06:24.320 --> 01:06:29.420
And then the optimization is jointly
01:06:29.420 --> 01:06:32.100
trying to process those raw inputs into
01:06:32.100 --> 01:06:35.012
useful features, and then to use those
01:06:35.012 --> 01:06:37.140
useful features to make predictions.
01:06:37.790 --> 01:06:41.040
For your final
01:06:41.040 --> 01:06:41.795
prediction.
01:06:41.795 --> 01:06:44.290
And it's a joint optimization.
01:06:44.290 --> 01:06:47.010
So random forests and boosted trees
01:06:47.010 --> 01:06:50.245
sort of do this, but they're kind of
01:06:50.245 --> 01:06:50.795
like greedy.
01:06:50.795 --> 01:06:52.519
They're making greedy
01:06:52.520 --> 01:06:54.484
decisions to try to
01:06:54.484 --> 01:06:56.070
like select the features and
01:06:56.070 --> 01:06:57.420
then use them for predictions.
01:06:58.100 --> 01:07:01.390
While deep networks are like not
01:07:01.390 --> 01:07:02.460
greedy, they're trying to do this
01:07:02.460 --> 01:07:05.460
global optimization to try to maximize
01:07:05.460 --> 01:07:07.750
the information gain of your prediction
01:07:07.750 --> 01:07:08.910
given your features.
01:07:09.670 --> 01:07:11.760
And this end to end learning of
01:07:11.760 --> 01:07:13.220
learning your features and prediction
01:07:13.220 --> 01:07:15.990
at the same time is a big reason why
01:07:15.990 --> 01:07:18.576
people often say that deep learning is
01:07:18.576 --> 01:07:20.869
like the best or it can be the best
01:07:20.870 --> 01:07:22.180
algorithm, at least if you have enough
01:07:22.180 --> 01:07:23.840
data to apply it.
01:07:25.250 --> 01:07:27.210
The intermediate features represent
01:07:27.210 --> 01:07:29.660
transformations of the data that are
01:07:29.660 --> 01:07:31.520
more easily reusable than, like tree
01:07:31.520 --> 01:07:32.609
partitions, for example.
01:07:32.610 --> 01:07:33.925
So this is another big advantage that
01:07:33.925 --> 01:07:36.433
you can take, like the output at some
01:07:36.433 --> 01:07:38.520
intermediate layer, and you can reuse
01:07:38.520 --> 01:07:40.450
it for some other problem, because it
01:07:40.450 --> 01:07:42.200
represents some kind of like
01:07:42.200 --> 01:07:44.446
transformation of image pixels, for
01:07:44.446 --> 01:07:47.510
example, in a way that may be
01:07:47.510 --> 01:07:49.090
semantically meaningful or meaningful
01:07:49.090 --> 01:07:51.250
for a bunch of different tasks.
01:07:51.250 --> 01:07:52.870
I'll talk about that more later.
01:07:53.810 --> 01:07:54.660
In another lecture.
01:07:55.470 --> 01:07:57.460
And then the structure of the network,
01:07:57.460 --> 01:07:59.200
for example like the number of nodes
01:07:59.200 --> 01:08:01.290
per layer is something that can be used
01:08:01.290 --> 01:08:02.460
to control the overfitting.
01:08:02.460 --> 01:08:03.840
So you can kind of like squeeze the
01:08:03.840 --> 01:08:07.160
representation into say 512 floating
01:08:07.160 --> 01:08:09.660
point values and that can
01:08:10.820 --> 01:08:11.810
prevent overfitting.
01:08:12.770 --> 01:08:15.000
And then often deep learning is used in
01:08:15.000 --> 01:08:17.200
conjunction with massive data sets
01:08:17.200 --> 01:08:18.730
which help to further reduce the
01:08:18.730 --> 01:08:20.210
variance so that you can apply very
01:08:20.210 --> 01:08:21.250
powerful models.
01:08:22.140 --> 01:08:25.430
Which have low bias and then rely on
01:08:25.430 --> 01:08:27.240
your enormous amount of data to reduce
01:08:27.240 --> 01:08:28.050
the variance.
01:08:31.530 --> 01:08:33.855
So in deep networks, the big challenge,
01:08:33.855 --> 01:08:35.820
the long standing problem with deep
01:08:35.820 --> 01:08:37.400
networks was the optimization.
01:08:37.400 --> 01:08:40.530
So how do we like optimize a many layer
01:08:40.530 --> 01:08:41.070
network?
01:08:41.920 --> 01:08:45.500
And one of the key ideas there was the
01:08:45.500 --> 01:08:47.170
stochastic gradient descent and back
01:08:47.170 --> 01:08:47.723
propagation.
01:08:47.723 --> 01:08:50.720
So we update the weights by summing the
01:08:50.720 --> 01:08:52.875
products of the error gradients from
01:08:52.875 --> 01:08:55.150
the input of the weight to the output
01:08:55.150 --> 01:08:55.730
of the network.
01:08:55.730 --> 01:08:57.710
So we basically trace all the paths
01:08:57.710 --> 01:09:00.050
from some weight into our prediction,
01:09:00.050 --> 01:09:01.810
and then based on that we see how this
01:09:01.810 --> 01:09:03.416
weight contributed to the error.
01:09:03.416 --> 01:09:05.620
And we make a small step to try to
01:09:05.620 --> 01:09:07.850
reduce that error based on a limited
01:09:07.850 --> 01:09:09.510
set of observations.
01:09:11.150 --> 01:09:13.840
And then the back propagation is a kind
01:09:13.840 --> 01:09:16.060
of dynamic program that efficiently
01:09:16.060 --> 01:09:17.970
reuses the weight gradient computations
01:09:17.970 --> 01:09:21.020
at each layer to do
01:09:21.020 --> 01:09:23.460
the weight updates for the previous
01:09:23.460 --> 01:09:23.890
layer.
01:09:24.670 --> 01:09:27.389
So this step, even though
01:09:27.390 --> 01:09:29.170
backpropagation feels kind of
01:09:29.170 --> 01:09:31.750
complicated computationally, it's very
01:09:31.750 --> 01:09:32.480
efficient.
01:09:32.480 --> 01:09:33.520
It takes almost the same
01:09:33.520 --> 01:09:35.940
amount of time
01:09:35.940 --> 01:09:38.090
to update your weights as to do a
01:09:38.090 --> 01:09:38.700
prediction.
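The reuse that makes backpropagation cheap can be sketched on a tiny two-layer network; all sizes and values here are made up for illustration, with a finite-difference check that the backpropagated gradient matches a numerical estimate.

```python
import numpy as np

# Tiny made-up network: x -> W1 -> sigmoid -> W2 -> scalar output,
# with a squared-error loss against a made-up target.
rng = np.random.default_rng(1)
x = rng.normal(size=3)
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(1, 4))
target = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Forward pass, caching intermediates for reuse in the backward pass.
z1 = W1 @ x
h = sigmoid(z1)
out = (W2 @ h)[0]
loss = 0.5 * (out - target) ** 2

# Backward pass: each quantity reuses the gradient computed after it,
# which is the dynamic-program structure of backpropagation.
d_out = out - target            # dL/d(out)
dW2 = d_out * h[None, :]        # gradient for the last layer's weights
d_h = d_out * W2[0]             # reused to reach the earlier layer
d_z1 = d_h * h * (1 - h)        # sigmoid gradient factor (at most 0.25)
dW1 = np.outer(d_z1, x)         # gradient for the first layer's weights

# Finite-difference check on one early weight.
eps = 1e-6
W1p = W1.copy()
W1p[0, 0] += eps
loss_p = 0.5 * ((W2 @ sigmoid(W1p @ x))[0] - target) ** 2
numeric = (loss_p - loss) / eps
```

The backward pass costs one pass over the same layers as the forward pass, which is why the weight update takes about as long as a prediction.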
01:09:41.550 --> 01:09:43.160
The deep networks are composed of
01:09:43.160 --> 01:09:44.225
layers and activations.
01:09:44.225 --> 01:09:46.590
So we have these like we talked about
01:09:46.590 --> 01:09:50.350
sigmoid activations, where the
01:09:50.350 --> 01:09:52.710
the sigmoids map everything from zero
01:09:52.710 --> 01:09:54.420
to one, and their downside is that
01:09:54.420 --> 01:09:56.230
the gradient is always less than one.
01:09:56.230 --> 01:09:57.800
Even at the peak the gradient is only
01:09:57.800 --> 01:10:01.000
.25, and at the extremes the gradient is really
01:10:01.000 --> 01:10:01.455
small.
01:10:01.455 --> 01:10:03.099
So if you have a lot of layers.
01:10:03.750 --> 01:10:07.740
Since the gradient update is based
01:10:07.740 --> 01:10:09.900
on a product of these gradients along
01:10:09.900 --> 01:10:11.800
the path, then if you have a whole
01:10:11.800 --> 01:10:13.360
bunch of sigmoids, the gradient keeps
01:10:13.360 --> 01:10:15.209
getting smaller and smaller and smaller
01:10:15.210 --> 01:10:17.320
as you go earlier in the network until
01:10:17.320 --> 01:10:19.226
it's essentially 0 at the beginning of
01:10:19.226 --> 01:10:20.680
the network, which means that you can't
01:10:20.680 --> 01:10:23.050
optimize like the early weights.
01:10:23.050 --> 01:10:25.220
That's the vanishing gradient problem,
01:10:25.220 --> 01:10:27.220
and that was one of the things that got
01:10:27.220 --> 01:10:29.160
like neural networks stuck for many
01:10:29.160 --> 01:10:29.800
years.
01:10:30.950 --> 01:10:31.440
Can you?
01:10:31.440 --> 01:10:32.210
Yeah.
01:10:33.700 --> 01:10:37.490
OK, so first, like, if you look
01:10:37.490 --> 01:10:40.115
at the gradient of a sigmoid it looks
01:10:40.115 --> 01:10:41.030
like this right?
01:10:41.670 --> 01:10:45.460
And at the peak it's only .25 and then
01:10:45.460 --> 01:10:47.340
at the extreme values it's extremely
01:10:47.340 --> 01:10:48.175
small.
01:10:48.175 --> 01:10:51.290
And So what that means is if you're
01:10:51.290 --> 01:10:52.765
gradient, let's say this is the end of
01:10:52.765 --> 01:10:54.000
the network and this is the beginning.
01:10:54.650 --> 01:10:57.030
Your gradient update for this weight
01:10:57.030 --> 01:10:58.925
will be based on a product of gradients
01:10:58.925 --> 01:11:00.975
for all the weights in between this
01:11:00.975 --> 01:11:02.680
weight and the output.
01:11:03.360 --> 01:11:05.286
And if they're all sigmoid activations,
01:11:05.286 --> 01:11:07.190
all of those gradients are going to be
01:11:07.190 --> 01:11:08.072
less than one.
01:11:08.072 --> 01:11:09.860
And so when you take the product of a
01:11:09.860 --> 01:11:11.330
whole bunch of numbers that are less
01:11:11.330 --> 01:11:12.873
than one, you end up with a really,
01:11:12.873 --> 01:11:14.410
really small number, right?
01:11:14.410 --> 01:11:16.080
And so that's why you can't train a
01:11:16.080 --> 01:11:18.230
deep network using sigmoids, because
01:11:18.230 --> 01:11:20.975
the gradients, they like vanish by
01:11:20.975 --> 01:11:22.525
the time you get to the earlier layers.
01:11:22.525 --> 01:11:24.490
And so the early layers don't train.
01:11:25.120 --> 01:11:26.130
And then you end up with these
01:11:26.130 --> 01:11:27.960
uninformative layers that are sitting
01:11:27.960 --> 01:11:29.170
between the inputs and the final
01:11:29.170 --> 01:11:30.300
layers, so you get really bad
01:11:30.300 --> 01:11:30.910
predictions.
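That shrinking product can be checked directly. The sketch below takes the best case, where every unit sits at the sigmoid's peak derivative of 0.25; even then, ten layers shrink the gradient by about a factor of a million.

```python
import numpy as np

# The gradient reaching an early weight is a product of sigmoid
# derivatives, one per layer on the path back from the output.
def sigmoid_grad(z):
    s = 1.0 / (1.0 + np.exp(-z))
    return s * (1 - s)              # maximum value is 0.25, at z = 0

n_layers = 10
path_gradient = sigmoid_grad(0.0) ** n_layers   # best case: 0.25 ** 10
```

In practice units sit away from z = 0, so the real product vanishes even faster than this best-case estimate.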
01:11:32.510 --> 01:11:34.390
So that's a sigmoid problem.
01:11:34.390 --> 01:11:36.980
ReLUs have a gradient of zero or
01:11:36.980 --> 01:11:40.195
one everywhere, so the ReLU looks like
01:11:40.195 --> 01:11:40.892
that.
01:11:40.892 --> 01:11:43.996
And in this part the gradient is 1, and
01:11:43.996 --> 01:11:45.869
this part the gradient is zero.
01:11:45.870 --> 01:11:48.470
They helped get networks deeper because
01:11:48.470 --> 01:11:49.880
that gradient of one is perfect.
01:11:49.880 --> 01:11:51.150
It doesn't get bigger, it doesn't get
01:11:51.150 --> 01:11:52.010
smaller as you, like,
01:11:52.010 --> 01:11:53.060
go through a bunch of ones.
01:11:53.930 --> 01:11:56.310
But the problem is that you can have
01:11:56.310 --> 01:11:59.420
these dead ReLUs where like an
01:11:59.420 --> 01:12:02.140
activation for some node is 0 for most
01:12:02.140 --> 01:12:04.780
of the data and then it has no gradient
01:12:04.780 --> 01:12:07.240
going into the weight and then it never
01:12:07.240 --> 01:12:07.790
changes.
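Both ReLU behaviors can be sketched in a few lines; the pre-activation values are made up. Active units pass the gradient through unchanged, while a dead unit passes nothing back to its weights.

```python
import numpy as np

def relu_grad(z):
    # ReLU gradient: 1 where the unit is active, 0 where it is not.
    return (np.asarray(z) > 0).astype(float)

# Through ten active ReLU layers the gradient product stays exactly 1.
active_path = float(np.prod([relu_grad(0.5) for _ in range(10)]))

# A "dead" unit: negative pre-activation for every input in the batch
# (made-up values), so zero gradient flows to its weights and it can
# never recover.
pre_acts = np.array([-2.0, -0.5, -3.1])
dead = float(relu_grad(pre_acts).sum()) == 0.0
```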
01:12:10.460 --> 01:12:13.690
And so then the final thing that kind
01:12:13.690 --> 01:12:15.179
of fixed this problem was this skip
01:12:15.180 --> 01:12:15.710
connection.
01:12:15.710 --> 01:12:18.000
So the skip connections are a shortcut
01:12:18.000 --> 01:12:19.950
around different layers of the network
01:12:19.950 --> 01:12:22.065
so that the gradients can flow along
01:12:22.065 --> 01:12:23.130
the skip connections.
01:12:23.880 --> 01:12:25.080
All the way to the beginning of the
01:12:25.080 --> 01:12:28.150
network and with a gradient of 1.
01:12:29.330 --> 01:12:30.900
So that came with the
01:12:30.900 --> 01:12:31.520
ResNet.
01:12:32.980 --> 01:12:33.750
And then?
01:12:35.510 --> 01:12:37.880
And then I also talked about how SGD
01:12:37.880 --> 01:12:39.185
has like a lot of different variants
01:12:39.185 --> 01:12:41.130
and tricks to improve the speed and
01:12:41.130 --> 01:12:42.960
stability of the optimization.
01:12:42.960 --> 01:12:44.830
For example, we have momentum so that
01:12:44.830 --> 01:12:46.250
if you keep getting weight updates in
01:12:46.250 --> 01:12:47.730
the same direction, those weight
01:12:47.730 --> 01:12:49.394
updates get faster and faster to
01:12:49.394 --> 01:12:50.229
improve the speed.
01:12:51.200 --> 01:12:52.970
You also have these normalizations so
01:12:52.970 --> 01:12:54.690
that you don't focus too much on
01:12:54.690 --> 01:12:56.890
updating particular
01:12:56.890 --> 01:12:58.620
weights, but you try to minimize the
01:12:58.620 --> 01:13:00.420
overall path of like how much each
01:13:00.420 --> 01:13:01.120
weight changes.
01:13:02.610 --> 01:13:04.173
I didn't talk about it, but
01:13:04.173 --> 01:13:06.320
another strategy is gradient clipping,
01:13:06.320 --> 01:13:08.990
where you say that a gradient can't be
01:13:08.990 --> 01:13:10.740
too big, and that can further
01:13:10.740 --> 01:13:13.515
improve the stability of
01:13:13.515 --> 01:13:14.470
the optimization.
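Both tricks can be sketched in a few lines; the hyperparameter values below are illustrative, not the lecture's. Momentum grows the effective step when gradients keep agreeing, and clipping caps a single oversized gradient.

```python
import numpy as np

mu, clip = 0.9, 1.0          # made-up momentum and clipping values

# Momentum: the same gradient direction five updates in a row.
v, steps = 0.0, []
for _ in range(5):
    grad = 1.0               # consistent direction every update
    v = mu * v + grad        # velocity accumulates past gradients
    steps.append(v)          # the effective step keeps growing

# Clipping: rescale a huge gradient so its norm is at most `clip`.
big = np.array([30.0, 40.0])                      # norm 50, too large
clipped = big * min(1.0, clip / np.linalg.norm(big))
```

The velocity approaches grad / (1 - mu), here a 10x larger step than plain SGD would take, while clipping preserves the gradient's direction and only shrinks its length.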
01:13:15.670 --> 01:13:18.570
And then most commonly people either
01:13:18.570 --> 01:13:21.410
use SGD plus momentum or Adam, which is
01:13:21.410 --> 01:13:23.290
one of the last things I talked about.
01:13:23.290 --> 01:13:25.740
But there's more advanced methods, Ranger
01:13:25.740 --> 01:13:27.760
or rectified Adam with
01:13:27.760 --> 01:13:29.860
gradient centralization and lookahead, which
01:13:29.860 --> 01:13:31.570
have like a whole bunch of complicated
01:13:31.570 --> 01:13:34.160
strategies for doing the same thing but
01:13:34.160 --> 01:13:35.420
just a better search.
01:13:39.250 --> 01:13:40.270
Alright, let me see.
01:13:40.270 --> 01:13:42.280
All right, so I think you probably
01:13:42.280 --> 01:13:44.459
don't want me to skip this, so let me
01:13:44.460 --> 01:13:45.330
talk about.
01:13:46.840 --> 01:13:48.860
Let me just talk about this in the last
01:13:48.860 --> 01:13:49.390
minute.
01:13:49.900 --> 01:13:53.860
And so the midterm, so the
01:13:53.860 --> 01:13:55.703
midterm is only going to be on
01:13:55.703 --> 01:13:57.090
things that we've already covered up to
01:13:57.090 --> 01:13:57.280
now.
01:13:57.280 --> 01:13:58.935
It's not going to be on anything that
01:13:58.935 --> 01:14:00.621
we cover in the next couple of days.
01:14:00.621 --> 01:14:02.307
The things that we cover in the next
01:14:02.307 --> 01:14:03.550
couple of days are important for
01:14:03.550 --> 01:14:04.073
homework three.
01:14:04.073 --> 01:14:06.526
So don't skip the lectures or anything,
01:14:06.526 --> 01:14:08.986
but they're not going to be on the
01:14:08.986 --> 01:14:09.251
midterm.
01:14:09.251 --> 01:14:11.750
So the midterm's on March 9th, and
01:14:11.750 --> 01:14:12.766
it'll be on PrairieLearn.
01:14:12.766 --> 01:14:14.735
So the exam will be open for most of
01:14:14.735 --> 01:14:15.470
the day.
01:14:15.470 --> 01:14:17.600
You don't come here to take it, you
01:14:17.600 --> 01:14:19.650
just take it somewhere else.
01:14:20.070 --> 01:14:23.000
Wherever you are and the exam will be
01:14:23.000 --> 01:14:24.740
75 minutes long, or longer
01:14:24.740 --> 01:14:27.185
if you have DRES accommodations and
01:14:27.185 --> 01:14:29.380
you sent them to me. It's mainly going
01:14:29.380 --> 01:14:30.730
to be multiple choice or multiple
01:14:30.730 --> 01:14:31.560
select.
01:14:31.560 --> 01:14:34.950
There's no coding or complex calculations
01:14:34.950 --> 01:14:36.920
in it; it's mainly, like, conceptual.
01:14:38.060 --> 01:14:40.350
You can, as I said, take it at home.
01:14:40.350 --> 01:14:42.670
It's open book, so it's not cheating,
01:14:42.670 --> 01:14:43.510
during the exam, to
01:14:43.510 --> 01:14:45.630
consult your notes, look at practice
01:14:45.630 --> 01:14:47.630
questions and answers, look at slides,
01:14:47.630 --> 01:14:48.590
search on the Internet.
01:14:48.590 --> 01:14:49.320
That's all fine.
01:14:50.030 --> 01:14:51.930
It would be cheating if you were to
01:14:51.930 --> 01:14:53.940
talk to a classmate about the exam
01:14:53.940 --> 01:14:55.550
after one, but not both of you have
01:14:55.550 --> 01:14:56.290
taken it.
01:14:56.290 --> 01:14:57.510
So don't try to find out.
01:14:57.510 --> 01:14:59.210
Don't have like one person or I don't
01:14:59.210 --> 01:15:00.120
want to give you ideas.
01:15:02.340 --> 01:15:02.970
I.
01:15:07.170 --> 01:15:09.080
It's also cheating of course to get
01:15:09.080 --> 01:15:10.490
help from another person during the
01:15:10.490 --> 01:15:10.910
exam.
01:15:10.910 --> 01:15:12.510
So like if I found out about either of
01:15:12.510 --> 01:15:13.960
those things, it would be a big deal,
01:15:13.960 --> 01:15:16.150
but I'd prefer you
01:15:16.150 --> 01:15:17.260
just don't do it.
01:15:17.370 --> 01:15:17.880
01:15:19.180 --> 01:15:21.330
And then also it's important to note
01:15:21.330 --> 01:15:22.717
you won't have time to look up all the
01:15:22.717 --> 01:15:22.895
answers.
01:15:22.895 --> 01:15:24.680
So it might sound like multiple choice
01:15:24.680 --> 01:15:25.855
open book is, like, really easy.
01:15:25.855 --> 01:15:27.156
You don't need to study it, just look
01:15:27.156 --> 01:15:28.015
it up when you get there.
01:15:28.015 --> 01:15:28.930
That will not work.
01:15:28.930 --> 01:15:31.879
I can almost guarantee you need to
01:15:31.880 --> 01:15:34.060
learn it ahead of time so that most of
01:15:34.060 --> 01:15:36.380
the answers and you may have time to
01:15:36.380 --> 01:15:37.960
look up one or two, but not more than
01:15:37.960 --> 01:15:38.100
that.
01:15:40.030 --> 01:15:42.600
I've got a list of some of the central
01:15:42.600 --> 01:15:44.210
topics here, and since we're at time,
01:15:44.210 --> 01:15:45.880
I'm not going to walk through it right
01:15:45.880 --> 01:15:46.970
now, but you can review it.
01:15:46.970 --> 01:15:48.040
The slides are posted.
01:15:48.760 --> 01:15:50.540
And then there's just some review
01:15:50.540 --> 01:15:51.230
questions.
01:15:51.230 --> 01:15:53.215
So you can look at these and I think
01:15:53.215 --> 01:15:55.140
the best way to study is to look at the
01:15:55.140 --> 01:15:56.720
practice questions that are posted on
01:15:56.720 --> 01:15:59.744
the website and use that not only to if
01:15:59.744 --> 01:16:01.322
those questions, but also how familiar
01:16:01.322 --> 01:16:02.970
are you with each of those concepts.
01:16:02.970 --> 01:16:04.830
And then go back and review the slides
01:16:04.830 --> 01:16:07.447
if you feel less
01:16:07.447 --> 01:16:08.660
familiar with the topic.
01:16:09.620 --> 01:16:11.186
Alright, so thank you.
01:16:11.186 --> 01:16:13.330
And on Thursday we're going to resume
01:16:13.330 --> 01:16:15.783
with CNNs and computer vision and
01:16:15.783 --> 01:16:17.320
we're getting into our section on
01:16:17.320 --> 01:16:19.190
applications, so like natural language
01:16:19.190 --> 01:16:20.510
processing and all kinds of other
01:16:20.510 --> 01:16:20.860
things.
01:16:27.120 --> 01:16:30.250
So we are.
01:16:32.010 --> 01:16:34.620
Starter code contains the code from
01:16:34.620 --> 01:16:37.380
homework one, so that you normally load
01:16:37.380 --> 01:16:39.650
the data in numpy arrays,
01:16:39.650 --> 01:16:42.220
but essentially we should transform
01:16:42.220 --> 01:16:44.360
them into like PyTorch.