WEBVTT Kind: captions; Language: en-US
NOTE
Created on 2024-02-07T21:01:05.3475172Z by ClassTranscribe
00:01:52.290 --> 00:01:52.710
Hello.
00:02:25.950 --> 00:02:29.350
Hey, good morning, everybody.
00:02:29.350 --> 00:02:30.740
Hope you had a good weekend.
00:02:33.550 --> 00:02:36.400
Alright, so today we're going to talk
00:02:36.400 --> 00:02:37.320
about Language.
00:02:38.020 --> 00:02:41.310
And there's like three kind of major
00:02:41.310 --> 00:02:43.520
concepts that I'm going to introduce.
00:02:43.520 --> 00:02:45.989
So there's a lot of
00:02:46.590 --> 00:02:47.600
content.
00:02:48.220 --> 00:02:51.810
And the first is, the first is like how
00:02:51.810 --> 00:02:54.760
we can represent language as integers
00:02:54.760 --> 00:02:56.670
using Subword tokenization.
00:02:57.500 --> 00:03:00.220
The second is being able to represent
00:03:00.220 --> 00:03:02.110
text as continuous vectors.
00:03:03.060 --> 00:03:05.140
And also words as continuous
00:03:05.140 --> 00:03:05.690
vectors.
00:03:05.690 --> 00:03:07.300
And the third is a new kind of
00:03:07.300 --> 00:03:08.880
processing called Attention or
00:03:08.880 --> 00:03:09.640
Transformers.
00:03:10.360 --> 00:03:13.100
These are kind of also in order of
00:03:13.100 --> 00:03:16.430
increasing impact.
00:03:17.120 --> 00:03:19.319
So like when I first learned about
00:03:19.320 --> 00:03:21.126
WordPiece or the byte pair encoding,
00:03:21.126 --> 00:03:23.670
which is a way that you can represent
00:03:23.670 --> 00:03:25.400
any text with like a fixed size
00:03:25.400 --> 00:03:27.130
Vocabulary, I was like that's a really
00:03:27.130 --> 00:03:27.950
cool idea.
00:03:27.950 --> 00:03:29.579
And then when I first learned about
00:03:29.580 --> 00:03:31.400
Word2Vec, which is a way of
00:03:31.400 --> 00:03:33.890
representing words in a high
00:03:33.890 --> 00:03:36.360
dimensional continuous space instead of
00:03:36.360 --> 00:03:39.320
as like different integers or different
00:03:39.320 --> 00:03:41.550
discrete tokens, it kind of like blew
00:03:41.550 --> 00:03:41.960
my mind.
00:03:41.960 --> 00:03:43.210
I was like, that's crazy.
00:03:43.210 --> 00:03:45.580
Like you'll see that you can add words
00:03:45.580 --> 00:03:46.540
together, and then the
00:03:46.610 --> 00:03:48.300
sum of the words leads to another word,
00:03:48.300 --> 00:03:49.320
which makes sense.
00:03:50.560 --> 00:03:52.520
And then Transformers just kind of
00:03:52.520 --> 00:03:53.370
changed the world.
00:03:53.370 --> 00:03:56.310
So there's a lot of impact in these
00:03:56.310 --> 00:03:57.030
ideas.
00:03:59.200 --> 00:04:01.830
So in the last lecture we talked about
00:04:01.830 --> 00:04:02.840
vision.
00:04:04.500 --> 00:04:07.600
And with vision, you kind of build up
00:04:07.600 --> 00:04:09.480
this representation from pixels to
00:04:09.480 --> 00:04:12.268
texture to essentially groups of groups
00:04:12.268 --> 00:04:13.490
of groups of pixels, right?
00:04:13.490 --> 00:04:16.010
You have a compositional model, and
00:04:16.010 --> 00:04:17.150
that's modeled with
00:04:18.090 --> 00:04:20.250
convolution, often.
00:04:20.250 --> 00:04:22.223
So for example if you look at this is
00:04:22.223 --> 00:04:23.885
just a couple of pixels blown up.
00:04:23.885 --> 00:04:25.450
You probably have no idea what it is.
00:04:25.450 --> 00:04:27.335
When you zoom into just a few pixels,
00:04:27.335 --> 00:04:29.300
you can't identify anything.
00:04:30.650 --> 00:04:34.000
If you zoom out a little bit then you
00:04:34.000 --> 00:04:37.286
can probably see some kind of edge and
00:04:37.286 --> 00:04:38.440
a little bit of features.
00:04:38.440 --> 00:04:39.876
For anyone who doesn't have the slides,
00:04:39.876 --> 00:04:41.860
can you recognize what that is?
00:04:44.660 --> 00:04:46.080
No, not yet.
00:04:46.080 --> 00:04:47.980
What eyes?
00:04:50.350 --> 00:04:51.040
Then if you.
00:04:52.230 --> 00:04:54.230
If you zoom out, do you have the slide
00:04:54.230 --> 00:04:55.225
or no?
00:04:55.225 --> 00:04:57.070
Yeah, I said if you don't have the
00:04:57.070 --> 00:04:58.290
slides, if you have the whole slide,
00:04:58.290 --> 00:05:00.190
it's pretty easy because you can see
00:05:00.190 --> 00:05:00.710
the one.
00:05:02.010 --> 00:05:04.643
So if you zoom out a little bit more, then
00:05:04.643 --> 00:05:05.860
you can see it's
00:05:05.860 --> 00:05:06.940
obviously a nose.
00:05:07.980 --> 00:05:10.310
Now you can see it's a raccoon and then
00:05:10.310 --> 00:05:11.730
you can see the whole thing.
00:05:12.810 --> 00:05:15.310
So when we build up, the visual
00:05:15.310 --> 00:05:17.520
representation is building up from
00:05:17.520 --> 00:05:21.060
little elements, raw pixels,
00:05:21.060 --> 00:05:23.095
into bigger patterns and bigger
00:05:23.095 --> 00:05:24.870
patterns until it finally makes sense.
00:05:25.740 --> 00:05:27.410
And that's what the convolutional
00:05:27.410 --> 00:05:28.650
networks that we learned about are
00:05:28.650 --> 00:05:29.870
doing.
00:05:29.870 --> 00:05:32.640
This is AlexNet: the early layers
00:05:32.640 --> 00:05:34.710
just correspond to edges and colors,
00:05:34.710 --> 00:05:36.460
and then the next layer, the
00:05:36.460 --> 00:05:38.605
activations correspond to textures, and
00:05:38.605 --> 00:05:40.510
then little subparts, and then parts
00:05:40.510 --> 00:05:43.300
and then eventually objects in scenes.
00:05:44.070 --> 00:05:46.680
And so the deep learning process is
00:05:46.680 --> 00:05:49.600
like a compositional process of putting
00:05:49.600 --> 00:05:51.538
together small elements into bigger
00:05:51.538 --> 00:05:53.180
elements and recognizing patterns out
00:05:53.180 --> 00:05:54.930
of them, and then bringing together
00:05:54.930 --> 00:05:56.700
those patterns and recognizing larger
00:05:56.700 --> 00:05:57.760
patterns and so on.
00:06:00.320 --> 00:06:02.070
So, but now we're going to talk about
00:06:02.070 --> 00:06:02.890
Language.
00:06:04.240 --> 00:06:06.270
And you might think like in language,
00:06:06.270 --> 00:06:07.670
the meaning is already in the words,
00:06:07.670 --> 00:06:08.280
right?
00:06:08.280 --> 00:06:12.780
So if I say cat or dog or running or
00:06:12.780 --> 00:06:14.520
something like that, it evokes a
00:06:14.520 --> 00:06:17.000
lot of meaning to you,
00:06:17.000 --> 00:06:19.240
where a pixel doesn't evoke
00:06:19.240 --> 00:06:21.070
very much meaning or even a small
00:06:21.070 --> 00:06:21.510
patch.
00:06:22.490 --> 00:06:24.830
And so it might appear that language is
00:06:24.830 --> 00:06:27.200
going to be very easy, that we can use
00:06:27.200 --> 00:06:28.800
straightforward Representations very
00:06:28.800 --> 00:06:29.590
effectively.
00:06:30.780 --> 00:06:32.290
But it's not totally true.
00:06:32.290 --> 00:06:33.920
I mean, it's a little bit true,
00:06:33.920 --> 00:06:36.179
but as you'll see, it's a bit
00:06:36.180 --> 00:06:37.400
more complicated than that.
00:06:38.950 --> 00:06:41.830
So, for example, if we consider this
00:06:41.830 --> 00:06:43.470
sentence on the left, he sat on the
00:06:43.470 --> 00:06:44.260
chair and it broke.
00:06:45.520 --> 00:06:47.265
Which of these is more similar?
00:06:47.265 --> 00:06:50.110
'The chair says the department is broke.'
00:06:50.110 --> 00:06:52.839
Is it option number one, or option number two:
00:06:52.840 --> 00:06:54.530
'After sitting, the seat is broken'?
00:06:55.170 --> 00:06:55.620
The second.
00:06:56.920 --> 00:06:59.450
So probably most of you would say, at
00:06:59.450 --> 00:07:01.050
least semantically, the second one is a
00:07:01.050 --> 00:07:02.210
lot more similar, right?
00:07:02.940 --> 00:07:04.735
But in terms of the words, the first
00:07:04.735 --> 00:07:05.858
one is a lot more similar.
00:07:05.858 --> 00:07:08.790
So the first one includes
00:07:08.790 --> 00:07:09.600
'chair', 'the', 'broke'.
00:07:09.600 --> 00:07:12.705
So all the keywords, or most of the
00:07:12.705 --> 00:07:13.830
keywords, in the sentence are
00:07:13.830 --> 00:07:15.816
in this first sentence, 'The chair says
00:07:15.816 --> 00:07:17.100
the department is broke.'
00:07:17.770 --> 00:07:20.280
Whereas the second sentence only has
00:07:20.280 --> 00:07:22.020
in common the word 'the', which isn't very
00:07:22.020 --> 00:07:22.910
meaningful.
00:07:22.910 --> 00:07:25.250
So if you were to represent these
00:07:25.250 --> 00:07:28.032
sentences by representing the
00:07:28.032 --> 00:07:30.346
words as different integers and then
00:07:30.346 --> 00:07:32.462
you try to compare the similarities of
00:07:32.462 --> 00:07:34.610
these sentences, and especially if you
00:07:34.610 --> 00:07:36.155
also consider the frequency of the
00:07:36.155 --> 00:07:38.327
words in general, then these sentences
00:07:38.327 --> 00:07:39.450
would not match at all.
00:07:39.450 --> 00:07:41.010
They'd be like totally dissimilar
00:07:41.010 --> 00:07:43.087
because they don't have any keywords in
00:07:43.087 --> 00:07:45.649
common, whereas these sentences would be
00:07:45.650 --> 00:07:46.540
pretty similar.
00:07:48.340 --> 00:07:50.380
So it's not
00:07:50.380 --> 00:07:51.140
super simple.
00:07:51.140 --> 00:07:52.740
We have to be a little bit careful.
00:07:54.190 --> 00:07:57.690
So one thing that we have to be
00:07:57.690 --> 00:07:59.900
aware of is that the same Word, and by
00:07:59.900 --> 00:08:02.800
Word I mean a character sequence can
00:08:02.800 --> 00:08:04.540
mean different things.
00:08:04.540 --> 00:08:07.910
So for example, 'chair' in
00:08:07.910 --> 00:08:10.370
the sentence on top and 'chair' in the
00:08:10.370 --> 00:08:11.722
second one are different: in one case
00:08:11.722 --> 00:08:13.540
it's like something you sit on and the
00:08:13.540 --> 00:08:15.030
other case it's a person that leads the
00:08:15.030 --> 00:08:15.470
department.
00:08:16.910 --> 00:08:17.810
'Broke':
00:08:17.810 --> 00:08:21.500
it either means the chair divided into
00:08:21.500 --> 00:08:23.400
different pieces, or out of money,
00:08:23.400 --> 00:08:23.690
right?
00:08:23.690 --> 00:08:24.900
They're totally different meanings.
00:08:26.270 --> 00:08:27.930
You can also have different words that
00:08:27.930 --> 00:08:30.700
mean similar things.
00:08:30.900 --> 00:08:35.350
So for example you could have
00:08:36.230 --> 00:08:38.603
'sitting' and 'sat': they're very
00:08:38.603 --> 00:08:40.509
different letter sequences, but they're
00:08:40.510 --> 00:08:42.640
both referring to the same thing.
00:08:42.640 --> 00:08:43.834
Or 'broke' and 'broken':
00:08:43.834 --> 00:08:47.035
They're different words, but they are
00:08:47.035 --> 00:08:48.240
very closely related.
00:08:50.090 --> 00:08:52.526
And importantly, the meaning of the
00:08:52.526 --> 00:08:53.890
word depends on the surrounding words.
00:08:53.890 --> 00:08:55.850
So nobody has any trouble like
00:08:55.850 --> 00:08:57.215
interpreting any of these sentences.
00:08:57.215 --> 00:08:59.030
The third one is like a bit awkward,
00:08:59.030 --> 00:09:01.890
but everyone can interpret them
00:09:01.890 --> 00:09:03.370
instantly; you don't have to think
00:09:03.370 --> 00:09:05.086
about it. When somebody says 'he sat
00:09:05.086 --> 00:09:06.531
on the chair and it broke,' you
00:09:06.531 --> 00:09:06.738
immediately
00:09:06.738 --> 00:09:08.389
think of chair as
00:09:08.390 --> 00:09:10.635
something you sit on, and if somebody
00:09:10.635 --> 00:09:12.283
says the chair says the department is
00:09:12.283 --> 00:09:14.166
broke, you don't think of a chair that
00:09:14.166 --> 00:09:16.049
you sit on, you immediately think of a
00:09:16.050 --> 00:09:18.023
person like saying that the
00:09:18.023 --> 00:09:18.940
department's out of money.
00:09:19.350 --> 00:09:21.700
So we reflexively like understand these
00:09:21.700 --> 00:09:25.150
words based on the surrounding words.
00:09:25.150 --> 00:09:26.780
And this simple idea that the Word
00:09:26.780 --> 00:09:28.667
meaning depends on the surrounding
00:09:28.667 --> 00:09:31.060
words is one of the
00:09:31.640 --> 00:09:34.730
underlying key concepts for word
00:09:34.730 --> 00:09:35.630
Representations.
00:09:40.140 --> 00:09:43.610
So if we want to analyze text, then the
00:09:43.610 --> 00:09:45.170
first thing that we need to do is to
00:09:45.170 --> 00:09:47.889
convert the text into tokens. So this
00:09:47.890 --> 00:09:49.780
word 'token' is going to come up a lot.
00:09:51.150 --> 00:09:53.380
A token is just basically a unit of
00:09:53.380 --> 00:09:53.830
data.
00:09:54.610 --> 00:09:56.715
And so it can be an integer or it can
00:09:56.715 --> 00:09:58.570
be a vector that represents a data
00:09:58.570 --> 00:09:59.130
element.
00:10:00.260 --> 00:10:01.710
It's a unit of processing.
00:10:01.710 --> 00:10:05.380
So you can say that a document, you can
00:10:05.380 --> 00:10:07.063
divide up a document into chunks of
00:10:07.063 --> 00:10:09.243
data and then each of those chunks is a
00:10:09.243 --> 00:10:09.595
token.
00:10:09.595 --> 00:10:12.150
So 'token' can be used to mean many
00:10:12.150 --> 00:10:13.440
things, but it just means like the
00:10:13.440 --> 00:10:15.350
atomic data element essentially.
00:10:16.320 --> 00:10:18.750
So if you have integer tokens, then the
00:10:18.750 --> 00:10:20.530
values are not continuous.
00:10:20.530 --> 00:10:22.839
So for example, five is no closer to 10
00:10:22.840 --> 00:10:23.970
than it is to 5000.
00:10:23.970 --> 00:10:25.592
They're just different labels.
00:10:25.592 --> 00:10:28.720
They're just separate
00:10:28.720 --> 00:10:29.330
elements.
00:10:30.520 --> 00:10:32.435
If you have Vector tokens, then usually
00:10:32.435 --> 00:10:33.740
those are done in a way so that
00:10:33.740 --> 00:10:34.990
similarity is meaningful.
00:10:34.990 --> 00:10:37.060
So if you take the L2 distance between
00:10:37.060 --> 00:10:39.050
tokens, that gives you a sense of how
00:10:39.050 --> 00:10:41.580
similar the information is that's
00:10:41.580 --> 00:10:43.660
represented by those tokens, if they're
00:10:43.660 --> 00:10:45.260
vectors.
00:10:45.260 --> 00:10:46.920
It's also common to use dot product or
00:10:46.920 --> 00:10:48.210
cosine similarity.
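NOTE
A minimal sketch of the similarity measures mentioned above (L2 distance, dot product, cosine similarity), using NumPy; the example vectors are made up for illustration.

import numpy as np

# Two hypothetical token vectors (e.g. embeddings of related words).
u = np.array([0.9, 0.1, 0.3])
v = np.array([0.8, 0.2, 0.25])

l2 = np.linalg.norm(u - v)                                # smaller = more similar
dot = np.dot(u, v)                                        # larger = more similar
cosine = dot / (np.linalg.norm(u) * np.linalg.norm(v))    # 1.0 = same direction

print(l2, dot, cosine)
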
00:10:49.480 --> 00:10:51.600
Token is an atomic element.
00:10:53.230 --> 00:10:53.620
Smallest.
00:10:56.170 --> 00:10:59.250
Integer tokens are not continuous, so
00:10:59.250 --> 00:11:01.260
can an integer token be assigned
00:11:01.260 --> 00:11:03.490
to a phrase as well as a word in
00:11:03.490 --> 00:11:04.390
some situations?
00:11:04.390 --> 00:11:06.940
Or is it always the word, which I'm
00:11:06.940 --> 00:11:09.700
assuming is like the smallest possible?
00:11:10.860 --> 00:11:13.630
So the question is whether integer can
00:11:13.630 --> 00:11:16.410
be assigned to a phrase or something
00:11:16.410 --> 00:11:17.370
bigger than a Word.
00:11:17.370 --> 00:11:19.075
It could potentially.
00:11:19.075 --> 00:11:21.072
So, to answer that:
00:11:21.072 --> 00:11:24.819
you typically have like 3
00:11:24.820 --> 00:11:26.750
layers of representation in a language
00:11:26.750 --> 00:11:27.226
system.
00:11:27.226 --> 00:11:29.790
The first layer is that you take a text
00:11:29.790 --> 00:11:31.346
sequence and you break it up into
00:11:31.346 --> 00:11:31.673
integers.
00:11:31.673 --> 00:11:33.650
And as I'll discuss, there's like many
00:11:33.650 --> 00:11:35.040
ways of doing that.
00:11:35.040 --> 00:11:37.270
One way is called SentencePiece, in which
00:11:37.270 --> 00:11:39.416
case you can actually have tokens or
00:11:39.416 --> 00:11:40.149
integers that
00:11:40.200 --> 00:11:42.530
bridge common words, so 'I am'
00:11:42.530 --> 00:11:43.950
might be represented with a single
00:11:43.950 --> 00:11:44.340
token.
00:11:45.380 --> 00:11:48.772
And then from those integers
00:11:48.772 --> 00:11:50.430
you have a mapping to continuous
00:11:50.430 --> 00:11:52.227
vectors that's called the Embedding.
00:11:52.227 --> 00:11:54.485
And then from that Embedding you have
00:11:54.485 --> 00:11:57.035
like a bunch of processing,
00:11:57.035 --> 00:11:58.940
usually now using Transformers or
00:11:58.940 --> 00:12:01.440
Attention models, that is then
00:12:01.440 --> 00:12:03.300
producing your final representation and
00:12:03.300 --> 00:12:04.070
prediction.
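NOTE
A rough sketch of those three layers (integer tokens, an embedding lookup, then attention-based processing) in PyTorch; the vocabulary size, embedding width, token IDs, and layer sizes below are placeholders for illustration, not the actual models discussed in this lecture.

import torch
import torch.nn as nn

vocab_size, embed_dim = 30000, 256               # hypothetical sizes
token_ids = torch.tensor([[12, 845, 3, 921]])    # stage 1: text -> integers (assumed tokenizer output)

embedding = nn.Embedding(vocab_size, embed_dim)  # stage 2: integers -> continuous vectors
x = embedding(token_ids)                         # shape (1, 4, 256)

# stage 3: attention/Transformer processing producing contextual representations
layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=8, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)
out = encoder(x)                                 # same shape as x, now context-dependent
print(out.shape)
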
00:12:07.730 --> 00:12:09.760
So let's look at the different ways.
00:12:09.760 --> 00:12:11.400
So first we're going to talk about how
00:12:11.400 --> 00:12:13.430
we can map words into integers.
00:12:13.430 --> 00:12:17.270
So we'll mention three ways of doing
00:12:17.270 --> 00:12:17.500
that.
00:12:18.140 --> 00:12:21.480
So one way is that we just
00:12:22.230 --> 00:12:25.770
delineate each word with a space.
00:12:25.770 --> 00:12:25.970
So
00:12:25.970 --> 00:12:28.900
we divide characters according to
00:12:28.900 --> 00:12:29.530
spaces.
00:12:30.140 --> 00:12:32.530
And for each unique character string,
00:12:32.530 --> 00:12:35.190
after doing that to some document, we
00:12:35.190 --> 00:12:36.830
assign it to a different integer.
00:12:36.830 --> 00:12:39.230
And usually when we do this, we would
00:12:39.230 --> 00:12:41.410
say that we're going to represent up to
00:12:41.410 --> 00:12:43.660
say 30,000 or 50,000 words.
00:12:44.370 --> 00:12:45.840
And we're only going to assign the most
00:12:45.840 --> 00:12:48.010
frequent words to integers, and
00:12:48.010 --> 00:12:50.290
anything else will be like have a
00:12:50.290 --> 00:12:52.680
special token, unk or unknown.
00:12:54.250 --> 00:12:57.511
So in this case chair would be assigned
00:12:57.511 --> 00:12:59.374
to 1 integer though would be assigned
00:12:59.374 --> 00:13:00.856
to another integer, says would be
00:13:00.856 --> 00:13:02.380
assigned to another integer and so on.
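NOTE
A minimal sketch of this word-level mapping: split on spaces, keep only the most frequent words, and send everything else to a special unknown token. The corpus and vocabulary size here are toy values.

from collections import Counter

corpus = "the chair says the department is broke . he sat on the chair and it broke ."
max_vocab = 10

counts = Counter(corpus.split())
vocab = {"<unk>": 0}
for word, _ in counts.most_common(max_vocab - 1):
    vocab[word] = len(vocab)

def encode(text):
    # Map each whitespace-delimited word to its integer, or to <unk> if not in the vocabulary.
    return [vocab.get(w, vocab["<unk>"]) for w in text.split()]

print(vocab)
print(encode("the chair is broken"))  # "broken" was never seen, so it maps to <unk>
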
00:13:03.540 --> 00:13:07.630
What is one advantage of this method?
00:13:09.260 --> 00:13:10.630
Yeah, it's pretty simple.
00:13:10.630 --> 00:13:12.080
That's a big advantage.
00:13:12.080 --> 00:13:13.410
What's another advantage?
00:13:25.150 --> 00:13:26.150
Maybe, yeah.
00:13:26.150 --> 00:13:28.600
Memory saving, yeah, could be.
00:13:29.570 --> 00:13:31.970
Maybe the other advantages won't be so
00:13:31.970 --> 00:13:34.190
clear unless I make them in comparison
00:13:34.190 --> 00:13:36.860
to others, but what's one disadvantage?
00:13:38.090 --> 00:13:38.870
It's hard to tell.
00:13:38.870 --> 00:13:41.910
Alright, sorry, go ahead.
00:13:42.910 --> 00:13:44.690
All the other ones, because they're the
00:13:44.690 --> 00:13:47.080
large variation of what others are
00:13:47.080 --> 00:13:48.230
considered, they're all bunch of
00:13:48.230 --> 00:13:48.720
important.
00:13:48.720 --> 00:13:51.260
So that might be more like I guess, on
00:13:51.260 --> 00:13:53.170
Accuracy, because you bundle all
00:13:53.170 --> 00:13:54.150
unknowns into one.
00:13:54.150 --> 00:13:56.270
A lot of them could be nouns, only a
00:13:56.270 --> 00:13:58.580
small number could be verbs and adjectives
00:13:58.580 --> 00:13:59.240
and whatnot.
00:13:59.240 --> 00:14:01.910
That all gets like affected or excluded.
00:14:02.360 --> 00:14:05.200
So that's one big disadvantage is that
00:14:05.200 --> 00:14:06.280
you might end up with a bunch of
00:14:06.280 --> 00:14:09.140
unknowns and there could be lots of
00:14:09.140 --> 00:14:10.160
those potentially.
00:14:10.160 --> 00:14:10.930
What was yours?
00:14:11.040 --> 00:14:11.510
Another.
00:14:13.390 --> 00:14:15.410
Each word will have different integers, and
00:14:15.410 --> 00:14:17.320
comparing them with a norm will not
00:14:17.320 --> 00:14:18.250
make any sense.
00:14:19.620 --> 00:14:20.590
Comparing what?
00:14:20.590 --> 00:14:22.590
The integers?
00:14:22.590 --> 00:14:24.110
You can't compare the integers to each
00:14:24.110 --> 00:14:24.400
other.
00:14:25.610 --> 00:14:26.780
That's true.
00:14:26.780 --> 00:14:30.280
So here's what I have in
00:14:30.280 --> 00:14:32.430
terms of, just purely as a strategy of
00:14:32.430 --> 00:14:33.730
mapping words to integers.
00:14:33.730 --> 00:14:36.160
So that problem, that the integers
00:14:36.160 --> 00:14:37.990
are not comparable will be the case for
00:14:37.990 --> 00:14:39.553
all three of these methods, but it will
00:14:39.553 --> 00:14:41.750
be fixed in the next section.
00:14:42.830 --> 00:14:44.960
So, pros: it's simple.
00:14:46.250 --> 00:14:49.290
Another Pro is that words do have like
00:14:49.290 --> 00:14:50.940
a fair amount of meaning in them, so
00:14:50.940 --> 00:14:53.380
you can, for example, if you have full
00:14:53.380 --> 00:14:55.700
documents, you can represent them as
00:14:55.700 --> 00:14:56.970
counts of the different words.
00:14:57.830 --> 00:14:59.520
And then you can use those counts to
00:14:59.520 --> 00:15:02.420
retrieve other documents or to try to
00:15:02.420 --> 00:15:04.410
classify a spam or something like that,
00:15:04.410 --> 00:15:05.750
and it will work fairly well.
00:15:05.750 --> 00:15:06.530
It's not terrible.
00:15:07.610 --> 00:15:09.860
So a word on its own often
00:15:09.860 --> 00:15:10.370
has a meaning.
00:15:10.370 --> 00:15:12.090
Now, sometimes it can have more than
00:15:12.090 --> 00:15:14.325
one meaning, but for the most, for the
00:15:14.325 --> 00:15:15.630
most part, it's pretty meaningful.
00:15:17.640 --> 00:15:19.405
There's also some big disadvantages.
00:15:19.405 --> 00:15:22.755
So, as someone raised, many words
00:15:22.755 --> 00:15:25.347
will map to unknown. And
00:15:25.347 --> 00:15:28.078
if you say I have a 30,000-word
00:15:28.078 --> 00:15:29.380
dictionary, you might think that's
00:15:29.380 --> 00:15:31.350
quite a lot, but it's not that much
00:15:31.350 --> 00:15:35.220
because all the different forms of each
00:15:35.220 --> 00:15:36.640
Word will be mapped to different
00:15:36.640 --> 00:15:37.450
tokens.
00:15:37.450 --> 00:15:39.830
And so you actually have like a huge
00:15:39.830 --> 00:15:42.470
potential dictionary if there's names,
00:15:42.470 --> 00:15:45.690
unusual words like anachronism.
00:15:45.740 --> 00:15:46.700
Or numbers.
00:15:46.700 --> 00:15:48.210
All of those will get mapped to
00:15:48.210 --> 00:15:48.820
unknown.
00:15:49.840 --> 00:15:51.790
So that can create some problems.
00:15:51.790 --> 00:15:54.220
You need a really large vocabulary, so
00:15:54.220 --> 00:15:56.420
if you want to try to have not too many
00:15:56.420 --> 00:15:57.490
unknowns, then
00:15:57.490 --> 00:15:58.860
you might even need like hundreds of
00:15:58.860 --> 00:16:01.470
thousands of dictionary elements.
00:16:02.430 --> 00:16:04.000
And Vocabulary.
00:16:04.000 --> 00:16:06.010
That's basically the set of things that
00:16:06.010 --> 00:16:07.930
you're representing.
00:16:07.930 --> 00:16:10.380
So it's like this set of like character
00:16:10.380 --> 00:16:11.880
combinations that you'll map to
00:16:11.880 --> 00:16:12.820
different integers.
00:16:14.040 --> 00:16:15.710
And then
00:16:15.710 --> 00:16:17.430
it also doesn't model the similarity of
00:16:17.430 --> 00:16:18.390
related words,
00:16:18.390 --> 00:16:20.820
like 'broke' and 'broken', which was another point
00:16:20.820 --> 00:16:22.440
brought up.
00:16:22.440 --> 00:16:25.340
So very similar strings get mapped to
00:16:25.340 --> 00:16:26.650
different integers, and there are no
00:16:26.650 --> 00:16:28.260
more similar than any other integers.
00:16:30.530 --> 00:16:32.990
Another extreme that we could do is to
00:16:32.990 --> 00:16:35.180
map each character to an integer.
00:16:36.520 --> 00:16:38.660
So it's as simple as that.
00:16:38.660 --> 00:16:42.230
There are 256 possible bytes and each byte is
00:16:42.230 --> 00:16:44.346
represented as a different number.
00:16:44.346 --> 00:16:47.370
And you could use a reduced
00:16:47.370 --> 00:16:49.670
vocabulary of just, like, letters
00:16:49.670 --> 00:16:51.730
and numbers and punctuation, but at
00:16:51.730 --> 00:16:53.140
most you have 256.
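NOTE
A minimal sketch of this character/byte-level mapping: in Python, encoding a string as UTF-8 already gives a sequence of integers in the range 0-255, and it is lossless, so you can decode back to the exact original text.

text = "The chair is broken."
tokens = list(text.encode("utf-8"))   # one integer per byte, each in 0..255
print(tokens[:10])                    # e.g. [84, 104, 101, 32, ...]
print(bytes(tokens).decode("utf-8"))  # recovers the original string exactly
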
00:16:53.920 --> 00:16:56.330
So what is the upside of this idea?
00:16:59.190 --> 00:16:59.900
That that's not.
00:17:02.560 --> 00:17:04.080
So this is even simpler.
00:17:04.080 --> 00:17:05.550
This is like the simplest thing you can
00:17:05.550 --> 00:17:05.705
do.
00:17:05.705 --> 00:17:07.040
You don't even have to look at
00:17:07.040 --> 00:17:09.140
frequencies to select your Vocabulary.
00:17:10.960 --> 00:17:11.380
What else?
00:17:11.380 --> 00:17:13.040
What's another big advantage?
00:17:13.040 --> 00:17:14.160
Can anyone think of another one?
00:17:16.630 --> 00:17:18.760
I really words into a.
00:17:21.340 --> 00:17:24.670
So every single, yeah, like any text
00:17:24.670 --> 00:17:25.500
can be mapped.
00:17:25.500 --> 00:17:27.090
With this, you'll have no unknowns,
00:17:27.090 --> 00:17:27.410
right?
00:17:27.410 --> 00:17:28.700
Because it's covering, because
00:17:28.700 --> 00:17:30.390
everything basically maps to bytes.
00:17:30.390 --> 00:17:32.850
So you can represent anything this way,
00:17:32.850 --> 00:17:34.530
and in fact you can represent other
00:17:34.530 --> 00:17:37.030
modalities as well, yeah.
00:17:37.660 --> 00:17:40.250
Like you said, the 'broke' and 'broken' example.
00:17:40.250 --> 00:17:41.950
Those two just looked similar, and now
00:17:41.950 --> 00:17:44.100
they'll be actually similar.
00:17:44.100 --> 00:17:45.140
That's right.
00:17:45.140 --> 00:17:47.040
So if you have words that have
00:17:47.040 --> 00:17:48.930
similar meanings and similar sequences
00:17:48.930 --> 00:17:51.440
of strings, then they'll be like more
00:17:51.440 --> 00:17:52.610
similarly represented.
00:17:53.480 --> 00:17:54.910
What's the disadvantage of this
00:17:54.910 --> 00:17:55.610
approach?
00:17:59.510 --> 00:18:01.340
So
00:18:01.740 --> 00:18:03.810
it's too long. So that's
00:18:03.810 --> 00:18:06.690
the main disadvantage, and also
00:18:06.690 --> 00:18:07.830
that
00:18:08.600 --> 00:18:10.390
a token by itself isn't very
00:18:10.390 --> 00:18:12.320
meaningful, which means that it takes a
00:18:12.320 --> 00:18:14.526
lot more like processing or kind of
00:18:14.526 --> 00:18:16.666
like understanding to make use of this
00:18:16.666 --> 00:18:17.800
kind of representation.
00:18:17.800 --> 00:18:20.030
So if I give you a document and I say
00:18:20.030 --> 00:18:22.251
it has like this many S's and this many
00:18:22.251 --> 00:18:24.218
A's and this many B's,
00:18:24.218 --> 00:18:25.530
you're not going to have any idea
00:18:25.530 --> 00:18:26.808
what that means.
00:18:26.808 --> 00:18:29.570
And in general you then
00:18:29.570 --> 00:18:31.410
need to consider jointly the
00:18:31.410 --> 00:18:33.180
sequences of characters in order to
00:18:33.180 --> 00:18:34.606
make any sense of it, which means that
00:18:34.606 --> 00:18:36.970
you need like a much more complicated
00:18:36.970 --> 00:18:38.720
kind of processing and representation.
00:18:40.180 --> 00:18:42.650
So I think everything was basically
00:18:42.650 --> 00:18:43.360
mentioned.
00:18:43.360 --> 00:18:46.420
Small vocabulary, so you could probably
00:18:46.420 --> 00:18:48.925
do it with less than 100 integers, but at most
00:18:48.925 --> 00:18:52.190
256. It's simple, and you can represent any
00:18:52.190 --> 00:18:52.920
document.
00:18:55.210 --> 00:18:58.980
Similar words will have similar
00:18:58.980 --> 00:19:01.140
sequences, but the count of tokens
00:19:01.140 --> 00:19:02.420
isn't meaningful, and the character
00:19:02.420 --> 00:19:03.380
sequences are long.
00:19:04.770 --> 00:19:06.310
So now finally we've reached the middle
00:19:06.310 --> 00:19:09.300
ground, which is Subword tokenization,
00:19:09.300 --> 00:19:11.980
so mapping each Subword to an integer.
00:19:12.220 --> 00:19:16.300
And so now basically it means that you
00:19:16.300 --> 00:19:20.210
map blocks of frequent characters to
00:19:20.210 --> 00:19:22.340
integers, but those don't necessarily
00:19:22.340 --> 00:19:23.633
need to be a full Word.
00:19:23.633 --> 00:19:25.840
They can be, and with something like
00:19:25.840 --> 00:19:27.290
SentencePiece, they can even be more than
00:19:27.290 --> 00:19:29.420
one word that are commonly put
00:19:29.420 --> 00:19:29.870
together.
00:19:31.430 --> 00:19:32.120
Question.
00:19:34.410 --> 00:19:35.770
If we were mapping based on.
00:19:39.320 --> 00:19:43.815
So at could be 1 integer and then CMB
00:19:43.815 --> 00:19:45.820
could would be another in that case.
00:19:46.700 --> 00:19:47.370
Like 1 Word.
00:19:49.130 --> 00:19:51.630
So a word gets divided into
00:19:51.630 --> 00:19:53.300
multiple integers, potentially.
00:19:55.470 --> 00:19:58.460
So with this
00:19:58.940 --> 00:20:01.490
you can again model any document,
00:20:01.490 --> 00:20:03.640
because this will start out as just
00:20:03.640 --> 00:20:05.220
the byte representation, and then you
00:20:05.220 --> 00:20:07.611
form groups of bytes, and you keep
00:20:07.611 --> 00:20:09.553
your leaf node
00:20:09.553 --> 00:20:11.507
representations as well, or your byte
00:20:11.507 --> 00:20:11.830
representations.
00:20:12.550 --> 00:20:15.220
So you can represent any Word this way,
00:20:15.220 --> 00:20:16.720
but then common words will be
00:20:16.720 --> 00:20:19.960
represented as whole integers and less
00:20:19.960 --> 00:20:22.030
common words will be broken up into a
00:20:22.030 --> 00:20:25.130
set of chunks, and also into
00:20:25.130 --> 00:20:26.340
different parts of speech.
00:20:26.340 --> 00:20:29.782
So 'jump', 'jumped', and 'jumps' might be like
00:20:29.782 --> 00:20:33.420
'jump' followed by 'ed' or 's'
00:20:34.340 --> 00:20:37.110
or just the end of the word.
00:20:39.080 --> 00:20:41.220
Now, the only disadvantage of this is
00:20:41.220 --> 00:20:42.650
that you need to solve for the good
00:20:42.650 --> 00:20:44.290
Subword tokenization, and it's a little
00:20:44.290 --> 00:20:46.070
bit more complicated than just counting
00:20:46.070 --> 00:20:49.550
words or just using characters straight
00:20:49.550 --> 00:20:49.650
up.
00:20:51.500 --> 00:20:53.530
So if we compare these Representations,
00:20:53.530 --> 00:20:55.130
I'll talk about the algorithm for it in
00:20:55.130 --> 00:20:55.470
a minute.
00:20:55.470 --> 00:20:56.880
It's actually pretty simple Algorithm.
00:20:58.040 --> 00:20:59.480
So if we look at these
00:20:59.480 --> 00:21:02.400
Representations, we can compare them in
00:21:02.400 --> 00:21:03.670
different ways.
00:21:03.670 --> 00:21:05.830
So first, just in terms of
00:21:05.830 --> 00:21:07.670
representation, if we take the 'chair is
00:21:07.670 --> 00:21:09.880
broken' example, the character representation
00:21:09.880 --> 00:21:11.480
will just divide it into all the
00:21:11.480 --> 00:21:12.390
characters.
00:21:12.390 --> 00:21:14.817
Subword might represent it with a 'ch' piece,
00:21:14.817 --> 00:21:17.442
which means that there's 'ch' and
00:21:17.442 --> 00:21:18.446
something after it,
00:21:18.446 --> 00:21:20.126
and a compound 'air' piece, which means that
00:21:20.126 --> 00:21:21.970
there's something before the 'air'.
00:21:21.970 --> 00:21:24.440
So: 'ch', 'air', 'is', 'broken'.
00:21:25.480 --> 00:21:27.460
So it's divided into 4 subwords here.
00:21:28.100 --> 00:21:29.780
And the Word representation is just
00:21:29.780 --> 00:21:31.460
that you'd have a different integer for
00:21:31.460 --> 00:21:31.980
each word.
00:21:31.980 --> 00:21:33.466
So what I mean by these is that for
00:21:33.466 --> 00:21:34.940
each of these things, between a comma
00:21:34.940 --> 00:21:36.310
there would be a different integer.
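NOTE
To see this kind of split in practice, you could run an off-the-shelf subword tokenizer; this sketch assumes the Hugging Face transformers library and the bert-base-uncased WordPiece vocabulary are available, and the exact pieces it prints may differ from the example above.

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
# Common words tend to stay whole; rarer words get split into pieces
# (continuation pieces are marked with a leading "##").
print(tok.tokenize("the chair is broken"))
print(tok.tokenize("an anachronism"))
print(tok.convert_tokens_to_ids(tok.tokenize("the chair is broken")))
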
00:21:38.100 --> 00:21:40.130
The Vocabulary Size for characters
00:21:40.130 --> 00:21:41.370
would be up to 256.
00:21:41.370 --> 00:21:44.222
For subwords it's typically 4K to
00:21:44.222 --> 00:21:44.640
50K.
00:21:44.640 --> 00:21:47.390
So GPT for example is 50K,
00:21:48.390 --> 00:21:51.140
but BERT, if I remember, is like 30 or 40K.
00:21:52.740 --> 00:21:55.670
And word representations,
00:21:56.380 --> 00:22:00.470
you usually do at least 30K. So
00:22:00.470 --> 00:22:04.700
generally, GPT being 50K is because
00:22:04.700 --> 00:22:05.800
they're trying to model all the
00:22:05.800 --> 00:22:07.050
languages in the world.
00:22:07.620 --> 00:22:10.010
And even if you're modeling English,
00:22:10.010 --> 00:22:11.620
you would usually need a Vocabulary of
00:22:11.620 --> 00:22:13.730
at least 30 K to do the Word
00:22:13.730 --> 00:22:14.710
representation.
00:22:15.400 --> 00:22:18.530
So the green means that it's an
00:22:18.530 --> 00:22:20.190
advantage, red is a disadvantage.
00:22:21.950 --> 00:22:23.500
Then if we look at the Completeness.
00:22:24.140 --> 00:22:27.310
So the character and subword ones are perfect
00:22:27.310 --> 00:22:28.480
because they can represent all
00:22:28.480 --> 00:22:32.070
language, but the word
00:22:32.070 --> 00:22:33.320
representation is incomplete.
00:22:33.320 --> 00:22:34.880
There'll be lots of things that
00:22:34.880 --> 00:22:35.820
map to unknown.
00:22:38.140 --> 00:22:39.570
If we think about the independent
00:22:39.570 --> 00:22:42.530
meaningfulness, the characters are bad
00:22:42.530 --> 00:22:44.200
because the letters by themselves don't
00:22:44.200 --> 00:22:45.040
mean anything.
00:22:45.040 --> 00:22:46.410
They're kind of like pixels.
00:22:46.410 --> 00:22:50.060
The Subword is pretty good, so often it
00:22:50.060 --> 00:22:53.000
will be mapped to like single words,
00:22:53.000 --> 00:22:54.710
but some words will be broken up.
00:22:55.460 --> 00:22:57.950
And the word is pretty good.
00:23:00.030 --> 00:23:01.820
And then if we look at sequence length
00:23:01.820 --> 00:23:03.710
then the characters give you the
00:23:03.710 --> 00:23:05.570
longest sequence to represent a
00:23:05.570 --> 00:23:06.140
document.
00:23:06.140 --> 00:23:08.780
It will be 1 integer per character.
00:23:08.780 --> 00:23:12.170
Subwords are medium; on average, in
00:23:12.170 --> 00:23:14.860
practice people have about 1.4 tokens
00:23:14.860 --> 00:23:17.590
per Word, so many common words will be
00:23:17.590 --> 00:23:19.960
represented with a single token, but
00:23:19.960 --> 00:23:22.645
some words will be broken up. And the
00:23:22.645 --> 00:23:26.130
word representation is the shortest sequence, shorter
00:23:26.130 --> 00:23:28.030
than a subword tokenization.
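NOTE
As a rough way to check that tokens-per-word figure yourself, you could run a BPE tokenizer over some text and divide the token count by the word count; this sketch assumes the tiktoken library with its GPT-2 encoding, and the ratio you get will depend on the text.

import tiktoken

enc = tiktoken.get_encoding("gpt2")   # GPT-2's ~50K BPE vocabulary
text = "Jet makers feud over seat width with big orders at stake."

tokens = enc.encode(text)
words = text.split()
print(len(tokens), "tokens for", len(words), "words ->",
      round(len(tokens) / len(words), 2), "tokens per word")
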
00:23:30.260 --> 00:23:32.740
And in terms of Encoding Word
00:23:32.740 --> 00:23:35.020
similarity, the characters encodes it
00:23:35.020 --> 00:23:35.610
somewhat.
00:23:35.610 --> 00:23:37.260
Subword I would say is a little better
00:23:37.260 --> 00:23:38.850
because something like 'broke' and 'broken'
00:23:38.850 --> 00:23:42.220
would include the
00:23:42.220 --> 00:23:44.310
same subword plus one more.
00:23:44.310 --> 00:23:46.887
So it's like a shorter way that encodes
00:23:46.887 --> 00:23:50.030
the common elements, and the
00:23:50.030 --> 00:23:51.890
word representation doesn't encode it at all.
00:23:55.940 --> 00:24:00.520
Now let's see how we can
00:24:00.520 --> 00:24:01.724
learn this subword tokenizer.
00:24:01.724 --> 00:24:04.236
So how do we break up chunks of
00:24:04.236 --> 00:24:04.550
characters?
00:24:04.550 --> 00:24:07.270
How do we break up a text
00:24:07.270 --> 00:24:09.040
document into chunks of characters so
00:24:09.040 --> 00:24:10.500
that we can represent each chunk with
00:24:10.500 --> 00:24:11.070
an integer?
00:24:11.860 --> 00:24:13.705
The algorithm is really simple.
00:24:13.705 --> 00:24:15.728
It's basically called byte pair
00:24:15.728 --> 00:24:18.800
encoding, and all the other subword
00:24:18.800 --> 00:24:21.240
tokenizations are just kind of
00:24:21.240 --> 00:24:23.120
tweaks on this idea.
00:24:23.120 --> 00:24:25.810
For example, whether you
00:24:25.810 --> 00:24:27.780
first divide into words using
00:24:27.780 --> 00:24:29.810
spaces, or whether you force
00:24:29.810 --> 00:24:32.360
punctuation to be its own thing.
00:24:32.360 --> 00:24:34.750
But they all use this byte pair encoding.
00:24:34.750 --> 00:24:37.720
The basic idea is that you start with each
00:24:37.720 --> 00:24:39.490
character assigned to a unique
00:24:39.490 --> 00:24:41.730
token, so you'll start with 256
00:24:41.790 --> 00:24:44.930
tokens, and then you iteratively assign
00:24:44.930 --> 00:24:46.910
a token to the most common pair of
00:24:46.910 --> 00:24:49.490
consecutive tokens until you reach your
00:24:49.490 --> 00:24:50.540
maximum size.
00:24:51.310 --> 00:24:55.760
So as an example, if my
00:24:55.760 --> 00:24:57.720
initial array of characters is
00:24:57.720 --> 00:24:59.580
this:
00:24:59.630 --> 00:25:00.500
'aaabdaaabac'.
00:25:01.340 --> 00:25:05.010
Then my most common pair is just 'a' 'a', I
00:25:05.010 --> 00:25:05.578
mean 'aa'.
00:25:05.578 --> 00:25:09.190
So I just replace that by another new
00:25:09.190 --> 00:25:09.940
integer.
00:25:09.940 --> 00:25:11.610
I'm just representing that by Z.
00:25:11.610 --> 00:25:14.390
So I say Z is 'aa'.
00:25:14.390 --> 00:25:16.520
Now I'm going to replace all my 'aa's
00:25:16.520 --> 00:25:17.370
with Z's.
00:25:17.370 --> 00:25:19.498
So this AA becomes Z.
00:25:19.498 --> 00:25:21.199
This AA becomes Z.
00:25:22.670 --> 00:25:26.320
My most common pair is AB.
00:25:26.320 --> 00:25:27.650
There's two abs.
00:25:28.770 --> 00:25:30.300
And sometimes there's ties, and then
00:25:30.300 --> 00:25:31.800
you just like arbitrarily break the
00:25:31.800 --> 00:25:32.130
tie.
00:25:33.030 --> 00:25:35.720
But now I can replace 'ab' by Y, and so
00:25:35.720 --> 00:25:38.186
now I say Y is equal to 'ab', Z is equal
00:25:38.186 --> 00:25:40.970
to 'aa', and I replace all the 'ab's by Y's.
00:25:42.290 --> 00:25:46.310
And then my most common pair is ZY, and
00:25:46.310 --> 00:25:49.230
so I can replace ZY by X, and I say X
00:25:49.230 --> 00:25:53.040
equals ZY, and now I have this
00:25:53.040 --> 00:25:53.930
shorter sequence.
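NOTE
Here is a small sketch of that merge loop on the same toy string. Note that ties between equally frequent pairs are broken arbitrarily (here, by whichever pair was counted first), so the intermediate merges may not exactly match the Z/Y/X walkthrough above, even though the final length is the same.

def most_common_pair(seq):
    # Count every adjacent pair of tokens and return the most frequent one.
    counts = {}
    for a, b in zip(seq, seq[1:]):
        counts[(a, b)] = counts.get((a, b), 0) + 1
    return max(counts, key=counts.get)

def merge(seq, pair, new_sym):
    # Replace every non-overlapping occurrence of `pair` with `new_sym`.
    out, i = [], 0
    while i < len(seq):
        if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
            out.append(new_sym)
            i += 2
        else:
            out.append(seq[i])
            i += 1
    return out

seq = list("aaabdaaabac")
for new_sym in ["Z", "Y", "X"]:
    pair = most_common_pair(seq)
    seq = merge(seq, pair, new_sym)
    print(pair, "->", new_sym, ":", "".join(seq))
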
00:25:55.560 --> 00:25:56.860
So you can use this for compression;
00:25:56.860 --> 00:25:58.540
I think it was first proposed
00:25:58.540 --> 00:26:01.360
just for compression, but here we
00:26:01.360 --> 00:26:02.470
actually want to use it differently.
00:26:02.470 --> 00:26:04.500
We care about this dictionary as like a
00:26:04.500 --> 00:26:05.780
way of representing the text.
00:26:08.190 --> 00:26:10.200
Question: even so,
00:26:13.650 --> 00:26:15.540
pattern finding is not that easy.
00:26:22.850 --> 00:26:25.860
And it's actually even easier for
00:26:25.860 --> 00:26:28.470
computers I would say, because as
00:26:28.470 --> 00:26:30.030
humans we can do it with a character
00:26:30.030 --> 00:26:30.905
string this long.
00:26:30.905 --> 00:26:33.070
But if you have like a 100 billion
00:26:33.070 --> 00:26:34.790
length character string, then trying to
00:26:34.790 --> 00:26:37.450
find the most common 2 character pairs
00:26:37.450 --> 00:26:38.396
would be hard.
00:26:38.396 --> 00:26:40.605
But I'll show you the algorithm in
00:26:40.605 --> 00:26:40.890
a minute.
00:26:40.890 --> 00:26:42.520
It's actually not complicated.
00:26:42.520 --> 00:26:44.850
You just iterate through the characters
00:26:44.850 --> 00:26:45.830
and you Count.
00:26:45.830 --> 00:26:48.340
You create a dictionary entry for each unique
00:26:48.340 --> 00:26:48.930
pair.
00:26:49.030 --> 00:26:51.370
And you count them, and then you do
00:26:51.370 --> 00:26:52.910
an argmax.
00:26:53.620 --> 00:26:57.590
And then, if I have some string
00:26:57.590 --> 00:26:59.330
like this, what would this represent
00:26:59.330 --> 00:27:00.810
here: 'XZD'?
00:27:11.190 --> 00:27:13.600
So first, X maps to what?
00:27:15.140 --> 00:27:15.720
OK.
00:27:15.720 --> 00:27:20.850
And then, so I'll have 'ZYZD', and then...
00:27:21.710 --> 00:27:22.550
So.
00:27:25.270 --> 00:27:28.160
Yeah, so right.
00:27:28.160 --> 00:27:28.630
Yep.
00:27:28.630 --> 00:27:32.622
So this becomes 'aa', this becomes 'ab', and
00:27:32.622 --> 00:27:33.708
this becomes 'aa'.
00:27:33.708 --> 00:27:35.560
So 'aa ab aa d', 'aaabaad'.
00:27:36.900 --> 00:27:37.750
So it's easy.
00:27:37.750 --> 00:27:39.845
It's pretty fast to like go.
00:27:39.845 --> 00:27:42.040
So Learning this tokenization takes
00:27:42.040 --> 00:27:42.500
some time.
00:27:42.500 --> 00:27:44.840
But then once you have it, it's fast to
00:27:44.840 --> 00:27:46.690
map into it, and then it's also fast to
00:27:46.690 --> 00:27:48.300
decompress back into the original
00:27:48.300 --> 00:27:48.870
characters.
00:27:51.560 --> 00:27:55.140
So this is the basic idea of what's
00:27:55.140 --> 00:27:57.210
called the WordPiece Tokenizer.
00:27:57.210 --> 00:28:00.455
So this was first proposed by Sennrich
00:28:00.455 --> 00:28:02.450
and then I think it was Wu et al. that
00:28:02.450 --> 00:28:04.120
gave it the name WordPiece.
00:28:04.120 --> 00:28:05.896
But they just say we did what Sennrich
00:28:05.896 --> 00:28:08.390
did and these papers are both from
00:28:08.390 --> 00:28:09.300
2016.
00:28:10.750 --> 00:28:12.890
So this is like the Algorithm from the
00:28:12.890 --> 00:28:15.731
Sennrich paper, and basically the
00:28:15.731 --> 00:28:18.198
algorithm just goes like this.
00:28:18.198 --> 00:28:20.603
You have this get_stats, which is
00:28:20.603 --> 00:28:22.960
that you create a dictionary, a hash
00:28:22.960 --> 00:28:25.835
table, of all pairs of characters.
00:28:25.835 --> 00:28:28.755
You go through each word,
00:28:28.755 --> 00:28:31.572
and for each character in your word
00:28:31.572 --> 00:28:34.610
you add a count for each pair of
00:28:34.610 --> 00:28:36.170
characters that you see.
00:28:37.080 --> 00:28:39.720
And then you take the best, which is the
00:28:39.720 --> 00:28:42.980
most common pair that you saw
00:28:42.980 --> 00:28:45.800
in your document or your set of
00:28:45.800 --> 00:28:46.400
documents.
00:28:47.120 --> 00:28:49.652
And then you merge, where merging means
00:28:49.652 --> 00:28:50.740
that you replace:
00:28:50.740 --> 00:28:53.180
anytime you see that pair, you replace
00:28:53.180 --> 00:28:54.760
it by the new token.
00:28:55.800 --> 00:28:57.530
And
00:28:58.340 --> 00:29:00.230
then you repeat; you keep on
00:29:00.230 --> 00:29:00.956
doing that.
00:29:00.956 --> 00:29:02.880
So it takes some time because you have
00:29:02.880 --> 00:29:04.600
to keep looping through your document
00:29:04.600 --> 00:29:07.580
for every single token that you
00:29:07.580 --> 00:29:08.530
want to add.
00:29:08.530 --> 00:29:10.540
And it's not usually just one document;
00:29:10.540 --> 00:29:12.130
by document I don't mean that it's like a Word
00:29:12.130 --> 00:29:14.510
document, it's often Wikipedia or
00:29:14.510 --> 00:29:16.640
something like it's a big set of text.
00:29:17.550 --> 00:29:20.273
But it's a pretty simple algorithm, and
00:29:20.273 --> 00:29:22.989
you just do it once and then
00:29:22.990 --> 00:29:24.280
you're done, and then other people can
00:29:24.280 --> 00:29:25.560
use this representation.
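NOTE
A sketch along the lines of the get_stats / merge loop described above, in the word-frequency style of the Sennrich paper: the corpus is summarized as word counts, get_stats counts adjacent symbol pairs weighted by those counts, and each iteration merges the most frequent pair into a new symbol. The toy word counts below are made up.

import re
from collections import defaultdict

def get_stats(vocab):
    # Count adjacent symbol pairs, weighted by how often each word occurs.
    pairs = defaultdict(int)
    for word, freq in vocab.items():
        symbols = word.split()
        for i in range(len(symbols) - 1):
            pairs[symbols[i], symbols[i + 1]] += freq
    return pairs

def merge_vocab(pair, vocab):
    # Rewrite every word, fusing the chosen pair into a single new symbol.
    bigram = re.escape(" ".join(pair))
    pattern = re.compile(r"(?<!\S)" + bigram + r"(?!\S)")
    return {pattern.sub("".join(pair), word): freq for word, freq in vocab.items()}

# Words are given as space-separated symbols ending with an end-of-word marker.
vocab = {"l o w </w>": 5, "l o w e r </w>": 2, "n e w e s t </w>": 6, "w i d e s t </w>": 3}
for _ in range(8):  # each iteration adds one new token to the vocabulary
    pairs = get_stats(vocab)
    best = max(pairs, key=pairs.get)
    vocab = merge_vocab(best, vocab)
    print(best, vocab)
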
00:29:27.760 --> 00:29:28.510
So.
00:29:29.540 --> 00:29:31.060
So you can try it.
00:29:31.060 --> 00:29:33.750
So in this sentence, your cat cannot do
00:29:33.750 --> 00:29:35.130
the, can he?
00:29:36.100 --> 00:29:37.880
What is the?
00:29:37.880 --> 00:29:40.930
What's the first pair that you would
00:29:40.930 --> 00:29:42.450
add a new token for?
00:29:45.520 --> 00:29:47.485
So it could be CA.
00:29:47.485 --> 00:29:48.470
So then.
00:29:49.190 --> 00:29:50.810
So CA would.
00:29:50.810 --> 00:29:54.710
Then I would Replace say CA by I'll
00:29:54.710 --> 00:29:54.920
just.
00:29:55.800 --> 00:29:56.390
Whoops.
00:29:58.190 --> 00:30:00.300
It didn't do any advance, but I'll type
00:30:00.300 --> 00:30:00.680
it here.
00:30:01.710 --> 00:30:03.600
So let's try 'ca'. I think there's actually
00:30:03.600 --> 00:30:05.270
2 correct answers, but...
00:30:06.400 --> 00:30:09.890
XTX Xnnot.
00:30:10.680 --> 00:30:15.543
Do the X and this is a little tricky.
00:30:15.543 --> 00:30:17.630
This depends on how you delimit,
00:30:17.630 --> 00:30:20.660
but assuming that you
00:30:20.660 --> 00:30:23.030
delimit by spaces, then
00:30:23.030 --> 00:30:25.240
this would be fine.
00:30:26.300 --> 00:30:30.130
So here's the... did you say
00:30:30.130 --> 00:30:30.680
'ca'?
00:30:30.680 --> 00:30:31.260
Is that what you said?
00:30:32.780 --> 00:30:35.050
OK, so it depends.
00:30:35.050 --> 00:30:35.973
It depends.
00:30:35.973 --> 00:30:37.819
So, I forgot something.
00:30:37.820 --> 00:30:42.490
One detail that I forgot is that you
00:30:42.490 --> 00:30:45.176
often represent like the start of the
00:30:45.176 --> 00:30:46.370
word or the end of the Word.
00:30:47.250 --> 00:30:50.210
So in the case of like Jet makers, feud
00:30:50.210 --> 00:30:52.240
over seat width with big orders at
00:30:52.240 --> 00:30:52.640
stake.
00:30:53.500 --> 00:30:56.700
It breaks it up into J, but this
00:30:56.700 --> 00:30:58.895
leading character means that J is at
00:30:58.895 --> 00:31:00.127
the start of a Word.
00:31:00.127 --> 00:31:01.950
So you represent like whether a letter
00:31:01.950 --> 00:31:03.480
is at the start of a word or not, so
00:31:03.480 --> 00:31:05.140
that you can tell whether you're like
00:31:05.140 --> 00:31:06.200
going into a new Word.
00:31:07.450 --> 00:31:10.980
So this is like '_J', 'et', '_makers',
00:31:10.980 --> 00:31:13.565
so that's like a whole word, and then
00:31:13.565 --> 00:31:16.840
'_feud', '_over',
00:31:16.840 --> 00:31:19.060
'_seat', '_width', and then all these
00:31:19.060 --> 00:31:20.680
common words get their own tokens.
00:31:21.820 --> 00:31:23.930
And so in this case, if we do it the
00:31:23.930 --> 00:31:26.450
same way then basically.
00:31:27.750 --> 00:31:30.150
We start with the first thing that we
00:31:30.150 --> 00:31:30.540
do.
00:31:31.770 --> 00:31:34.010
Is basically do this.
00:31:36.730 --> 00:31:40.570
Cannot do the can.
00:31:41.520 --> 00:31:43.190
Can he?
00:31:48.010 --> 00:31:48.670
And.
00:31:51.490 --> 00:31:51.860
Yeah.
00:31:56.740 --> 00:32:00.210
So let's do it one more time.
00:32:00.210 --> 00:32:00.963
So let's.
00:32:00.963 --> 00:32:02.790
So what is the most common pair now?
00:32:07.980 --> 00:32:16.210
Let's see: 1, 2, 3, 4, 5... 1, 2, 3, 4. OK, yeah, it is.
00:32:17.010 --> 00:32:17.920
So now I get.
00:32:18.710 --> 00:32:19.660
Your.
00:32:23.010 --> 00:32:23.600
Ynot.
00:32:29.380 --> 00:32:31.640
It's a special byte, yeah.
00:32:31.640 --> 00:32:33.720
So it's like its own byte to represent
00:32:33.720 --> 00:32:33.970
that.
00:32:33.970 --> 00:32:35.950
Like, this is basically start-of-word.
00:32:36.580 --> 00:32:38.520
And then you get into the.
00:32:39.310 --> 00:32:41.055
And then you get into the characters of
00:32:41.055 --> 00:32:41.480
the Word.
00:32:49.180 --> 00:32:51.220
We have the special byte and then the...
00:32:52.480 --> 00:32:54.800
This could be a group, yeah?
00:32:54.800 --> 00:32:56.810
So you could Merge that like a Word
00:32:56.810 --> 00:32:58.510
often starts with this, essentially.
00:33:01.420 --> 00:33:04.310
Yeah, X and where?
00:33:06.100 --> 00:33:06.520
So for.
00:33:07.940 --> 00:33:09.240
Yeah.
00:33:09.240 --> 00:33:10.045
What did I mean?
00:33:10.045 --> 00:33:10.440
Yeah.
00:33:10.440 --> 00:33:10.810
Thanks.
00:33:11.570 --> 00:33:11.950
That's.
00:33:13.670 --> 00:33:14.180
OK, good.
00:33:14.910 --> 00:33:16.450
Alright, so what's the next one?
00:33:20.950 --> 00:33:24.330
So let's see, there's 1234.
00:33:27.930 --> 00:33:31.250
I think yeah, so it's a tie, right?
00:33:31.250 --> 00:33:32.290
But let's go with XN.
00:33:32.290 --> 00:33:34.610
So it looks like it could be Xnnot or
00:33:34.610 --> 00:33:35.930
it could be under score X.
00:33:36.790 --> 00:33:39.860
But let's do 'Xn' and I'll call it
00:33:41.300 --> 00:33:46.130
Y. So: 'Ynot do the'.
00:33:47.810 --> 00:33:48.450
Why?
00:33:52.780 --> 00:33:53.650
Yeah.
00:33:55.280 --> 00:33:55.780
Early morning.
00:33:59.040 --> 00:34:00.210
Why?
00:34:01.950 --> 00:34:06.600
Why he OK?
00:34:07.950 --> 00:34:09.530
So I think you get the idea.
00:34:09.530 --> 00:34:10.740
And then you keep doing that.
00:34:10.740 --> 00:34:12.140
Now I'm just doing it with one
00:34:12.140 --> 00:34:14.023
sentence, but again, you'd usually be
00:34:14.023 --> 00:34:16.929
doing it with like a GB of text, so it
00:34:16.930 --> 00:34:19.059
wouldn't be so
00:34:20.320 --> 00:34:20.970
contrived.
00:34:25.260 --> 00:34:26.110
So
00:34:26.190 --> 00:34:30.865
this is the basic algorithm.
00:34:30.865 --> 00:34:32.880
In practice there's often tweaks
00:34:32.880 --> 00:34:33.260
to it.
00:34:33.260 --> 00:34:35.560
So for example in SentencePiece you allow
00:34:35.560 --> 00:34:37.720
different words to be merged
00:34:37.720 --> 00:34:39.770
together, because you don't
00:34:39.770 --> 00:34:41.200
delimit by spaces first.
00:34:42.160 --> 00:34:44.290
Another thing that's often done
00:34:44.290 --> 00:34:46.700
is that you say that the punctuation
00:34:46.700 --> 00:34:49.560
has to be separate, so that dog with a
00:34:49.560 --> 00:34:51.630
period, dog with a question mark, and
00:34:51.630 --> 00:34:53.350
dog with a comma don't get all mapped
00:34:53.350 --> 00:34:54.330
to different tokens.
00:34:55.200 --> 00:34:58.410
And then sometimes people also
00:34:58.410 --> 00:35:00.020
do like a forward and backward pass
00:35:00.020 --> 00:35:01.110
as they merge.
00:35:02.160 --> 00:35:03.690
And then,
00:35:04.530 --> 00:35:06.480
since it's a
00:35:06.480 --> 00:35:08.230
greedy algorithm, you can check to
00:35:08.230 --> 00:35:10.040
see if like some pairs are like not
00:35:10.040 --> 00:35:12.260
needed anymore, and then you can remove
00:35:12.260 --> 00:35:14.060
them and then fill in those tokens.
00:35:14.060 --> 00:35:15.930
But those are all kind of like
00:35:15.930 --> 00:35:17.720
implementation details that are not too
00:35:17.720 --> 00:35:19.240
critical. Question?
00:35:21.880 --> 00:35:25.360
So if you start with 256 and then
00:35:25.360 --> 00:35:27.590
you're going up to 40,000 for example,
00:35:27.590 --> 00:35:28.940
then it would be almost 40,000.
00:35:28.940 --> 00:35:30.020
So it would be:
00:35:30.020 --> 00:35:32.010
you'd do one merge for
00:35:32.010 --> 00:35:33.200
every new token that you need to
00:35:33.200 --> 00:35:33.730
create.
00:35:35.160 --> 00:35:38.220
So in the literature, typically
00:35:38.220 --> 00:35:40.600
it's like 30,000 to 50,000 for your
00:35:40.600 --> 00:35:41.770
dictionary size.
00:35:42.970 --> 00:35:43.340
00:35:49.210 --> 00:35:51.000
So that's Subword tokenization.
00:35:51.000 --> 00:35:54.036
So you map groups of characters in
00:35:54.036 --> 00:35:54.830
integers so that.
00:35:54.830 --> 00:35:56.430
Then you can represent any text as a
00:35:56.430 --> 00:35:58.180
sequence of integers, and the text is
00:35:58.180 --> 00:35:59.030
fully represented.
00:35:59.800 --> 00:36:01.240
But there still isn't like a really
00:36:01.240 --> 00:36:02.450
good way to represent the code
00:36:02.450 --> 00:36:03.310
similarity, right?
00:36:03.310 --> 00:36:04.810
Because you'll end up with like these
00:36:04.810 --> 00:36:08.150
30,000 or 40,000 integers, and there's
00:36:08.150 --> 00:36:09.920
no like relationship between those
00:36:09.920 --> 00:36:10.197
integers.
00:36:10.197 --> 00:36:12.610
Like 4 is not anymore similar to five
00:36:12.610 --> 00:36:14.760
than it is to like 6000.
00:36:16.220 --> 00:36:19.402
So the next idea to try to Encode the
00:36:19.402 --> 00:36:21.802
meaning of the word in a continuous
00:36:21.802 --> 00:36:24.230
Vector, or the meaning of Subword in a
00:36:24.230 --> 00:36:24.850
continuous Vector.
00:36:24.850 --> 00:36:26.530
So first we'll just look at it in terms
00:36:26.530 --> 00:36:27.010
of words.
00:36:28.940 --> 00:36:31.290
And the main idea is to try to learn
00:36:31.290 --> 00:36:33.170
these vectors based on the surrounding
00:36:33.170 --> 00:36:33.700
words.
00:36:34.970 --> 00:36:38.760
So this actually this idea first became
00:36:38.760 --> 00:36:41.120
popular before Subword tokenization
00:36:41.120 --> 00:36:42.233
became popular.
00:36:42.233 --> 00:36:44.540
So it was just operating on full words.
00:36:44.540 --> 00:36:47.160
And one of the key papers is this paper
00:36:47.160 --> 00:36:47.970
Word2Vec.
00:36:48.740 --> 00:36:53.440
And in Word2Vec, for each word you
00:36:53.440 --> 00:36:55.020
solve for some kind of continuous
00:36:55.020 --> 00:36:57.110
representation and they have two
00:36:57.110 --> 00:36:58.160
different ways to do that.
00:36:58.160 --> 00:37:00.065
Essentially you're just trying to
00:37:00.065 --> 00:37:02.700
predict: given, like,
00:37:02.700 --> 00:37:04.054
say 5 words in a row,
00:37:04.054 --> 00:37:05.913
you either predict the center word
00:37:05.913 --> 00:37:07.280
given the surrounding words.
00:37:07.990 --> 00:37:09.940
Or you try to predict the surrounding
00:37:09.940 --> 00:37:11.460
words given the center words.
00:37:12.460 --> 00:37:15.660
So in this one, the continuous bag of words
00:37:15.660 --> 00:37:18.390
I think it is, you say that,
00:37:18.970 --> 00:37:20.949
first, for
00:37:20.950 --> 00:37:23.650
each integer you have some projection
00:37:23.650 --> 00:37:25.980
into, for example, a 100-dimensional
00:37:25.980 --> 00:37:26.460
vector.
00:37:27.550 --> 00:37:29.940
And then you take your training set and
00:37:29.940 --> 00:37:32.417
divide it up into sets of five words.
00:37:32.417 --> 00:37:35.430
If your window size is, if your T is 2,
00:37:35.430 --> 00:37:37.633
so that your window size is 5.
00:37:37.633 --> 00:37:40.290
And then you say that the center Word
00:37:40.290 --> 00:37:44.930
should be as close to the sum of the
00:37:44.930 --> 00:37:47.380
surrounding words as possible in this
00:37:47.380 --> 00:37:48.580
Vector representation.
00:37:49.260 --> 00:37:51.170
And then you just do like some kind of
00:37:51.170 --> 00:37:53.026
like I think these RMS prop, but some
00:37:53.026 --> 00:37:54.290
kind of like gradient descent,
00:37:54.290 --> 00:37:56.410
subgradient descent to optimize your
00:37:56.410 --> 00:37:58.250
Word vectors under this constraint.
00:37:59.740 --> 00:38:01.684
The other method is called Skip gram.
00:38:01.684 --> 00:38:04.960
So in Skip gram you also divide your
00:38:04.960 --> 00:38:07.390
document into like series of some
00:38:07.390 --> 00:38:08.995
number of words.
00:38:08.995 --> 00:38:11.299
But instead, you again
00:38:11.300 --> 00:38:12.636
project your center word.
00:38:12.636 --> 00:38:14.473
But then based on that projection you
00:38:14.473 --> 00:38:16.200
then again have a linear mapping to
00:38:16.200 --> 00:38:18.300
each of your other words that tries to
00:38:18.300 --> 00:38:20.841
predict which other word comes, like, 2
00:38:20.841 --> 00:38:23.479
tokens before, 1 token before, one
00:38:23.479 --> 00:38:25.125
token after, or two tokens after.
00:38:25.125 --> 00:38:26.772
So this is Skip gram.
00:38:26.772 --> 00:38:28.910
So in this case the average of the
00:38:28.910 --> 00:38:30.430
surrounding words in your Vector
00:38:30.430 --> 00:38:32.220
representation should be the same as
00:38:32.220 --> 00:38:32.970
the center Word.
00:38:33.380 --> 00:38:35.010
In this case, the center Word should
00:38:35.010 --> 00:38:37.340
predict what the linear model after
00:38:37.340 --> 00:38:39.130
it's projected into this Vector
00:38:39.130 --> 00:38:40.155
representation.
00:38:40.155 --> 00:38:43.860
The surrounding words and this can also
00:38:43.860 --> 00:38:46.390
be solved using subgradient descent.
00:38:46.390 --> 00:38:50.150
So these take on the order of like a
00:38:50.150 --> 00:38:51.940
day or two to process typically.
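NOTE
If you want to play with this, the gensim library has a Word2Vec implementation that covers both variants; this is a rough sketch, with a toy placeholder corpus and parameter names that assume gensim 4.x.

from gensim.models import Word2Vec

# Each sentence is a list of word tokens; a real run would use GBs of text.
sentences = [
    ["he", "sat", "on", "the", "chair", "and", "it", "broke"],
    ["after", "sitting", "the", "seat", "is", "broken"],
    ["the", "chair", "says", "the", "department", "is", "broke"],
]

# sg=0 -> CBOW (predict center word from context); sg=1 -> skip-gram (predict context from center word).
model = Word2Vec(sentences, vector_size=100, window=2, min_count=1, sg=1, epochs=50)

vec = model.wv["chair"]                       # the learned 100-dimensional vector for "chair"
print(model.wv.most_similar("chair", topn=3))
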
00:38:53.840 --> 00:38:54.420
Question.
00:38:55.570 --> 00:38:55.940
Different.
00:38:58.210 --> 00:38:59.490
Complete their values.
00:39:00.640 --> 00:39:02.710
For example like Skip crap like we have
00:39:02.710 --> 00:39:03.110
a word,
00:39:03.110 --> 00:39:04.210
I don't know, like table.
00:39:05.590 --> 00:39:07.420
In the first sentence versus the last
00:39:07.420 --> 00:39:09.360
sentence, the probability of the
00:39:09.360 --> 00:39:10.770
words surrounding the Word table in each.
00:39:12.570 --> 00:39:13.270
It would be different.
00:39:16.510 --> 00:39:17.716
Right.
00:39:17.716 --> 00:39:20.129
Yeah, right.
00:39:20.130 --> 00:39:21.400
So the probability of a Word would
00:39:21.400 --> 00:39:26.310
depend on its on its neighbors and what
00:39:26.310 --> 00:39:28.050
we're trying to do here is we're trying
00:39:28.050 --> 00:39:29.190
to essentially.
00:39:30.060 --> 00:39:31.980
Under that model, where we say that the
00:39:31.980 --> 00:39:33.950
probability of a Word depends on
00:39:33.950 --> 00:39:36.140
its neighbors, we're trying to find the
00:39:36.140 --> 00:39:38.130
best continuous representation of each
00:39:38.130 --> 00:39:40.630
word so that likelihood is maximized
00:39:40.630 --> 00:39:42.700
for our training documents.
00:39:48.610 --> 00:39:50.820
So at the end of this you replace each
00:39:50.820 --> 00:39:53.000
word by some fixed length continuous
00:39:53.000 --> 00:39:53.775
Vector.
00:39:53.775 --> 00:39:56.590
And these vectors can predict Word
00:39:56.590 --> 00:39:59.850
relationships, so they test like the
00:39:59.850 --> 00:40:04.050
ability to predict different kinds of
00:40:04.050 --> 00:40:06.085
words based on surrounding text or to
00:40:06.085 --> 00:40:08.230
predict different analogies using this
00:40:08.230 --> 00:40:09.320
representation.
00:40:09.320 --> 00:40:12.710
So it's best to just go ahead and show
00:40:12.710 --> 00:40:13.270
that.
00:40:13.430 --> 00:40:16.280
So what I mean is that for example if
00:40:16.280 --> 00:40:18.865
you take the Word mappings, this is for
00:40:18.865 --> 00:40:22.330
300 dimensional Word vectors for like
00:40:22.330 --> 00:40:23.670
for these different words here.
00:40:24.440 --> 00:40:28.810
And you mathematically do Paris minus
00:40:28.810 --> 00:40:31.350
France plus Italy, then the closest
00:40:31.350 --> 00:40:32.910
Word to that will be Rome.
00:40:33.900 --> 00:40:37.320
Or if you do Paris, if you do Paris
00:40:37.320 --> 00:40:39.806
minus France plus Japan, then the
00:40:39.806 --> 00:40:41.570
closest Word will be Tokyo.
00:40:41.570 --> 00:40:44.119
Or if you do Paris minus France plus
00:40:44.120 --> 00:40:46.459
Florida, then the closest Word will be
00:40:46.460 --> 00:40:47.300
Tallahassee.
00:40:48.690 --> 00:40:50.550
And it works with lots of things.
00:40:50.550 --> 00:40:51.270
So if you do like.
00:40:52.070 --> 00:40:54.889
Cu minus copper plus zinc, then you get
00:40:54.890 --> 00:40:55.600
Zn.
00:40:55.600 --> 00:40:58.470
Or if you do France minus Sarkozy plus
00:40:58.470 --> 00:41:00.070
Berlusconi, you get Italy.
00:41:00.770 --> 00:41:02.230
Or Einstein.
00:41:02.230 --> 00:41:04.470
Scientist minus Einstein, plus Messi,
00:41:04.470 --> 00:41:05.560
you get midfielder.
00:41:06.840 --> 00:41:07.920
So it learns.
00:41:07.920 --> 00:41:10.200
It learns the relationships of these
00:41:10.200 --> 00:41:12.230
words in an additive way.
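NOTE
A minimal sketch, not from the lecture: the "Paris minus France plus Italy is closest to Rome" arithmetic, assuming `vectors` is a dict from word to a NumPy array (for example 300-dimensional pretrained Word2Vec vectors); the helper names and the cosine ranking are illustrative.
import numpy as np
def closest_word(target, vectors, exclude=()):
    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))
    return max((w for w in vectors if w not in exclude), key=lambda w: cos(vectors[w], target))
def analogy(a, b, c, vectors):
    # "a is to b as c is to ?": vec(b) - vec(a) + vec(c), e.g. Paris - France + Italy ~ Rome
    return closest_word(vectors[b] - vectors[a] + vectors[c], vectors, exclude={a, b, c})
# analogy("France", "Paris", "Italy", vectors) would ideally return "Rome"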
00:41:13.690 --> 00:41:17.860
And there's a cool demo here that I'll
00:41:17.860 --> 00:41:18.085
show.
00:41:18.085 --> 00:41:20.070
There's actually 2 demos, one of them
00:41:20.070 --> 00:41:21.300
I'm not going to do in class.
00:41:22.190 --> 00:41:24.950
But it kind of like explains more like
00:41:24.950 --> 00:41:26.950
how you train the Word2Vec
00:41:26.950 --> 00:41:27.810
representation.
00:41:28.730 --> 00:41:29.720
And then?
00:41:34.200 --> 00:41:36.240
This one is.
00:41:40.770 --> 00:41:42.550
This one is like visualizing.
00:41:56.170 --> 00:41:58.340
It takes a little bit of time to
00:41:58.340 --> 00:41:59.360
download the model.
00:42:02.190 --> 00:42:04.300
So what it's going to show is like.
00:42:04.300 --> 00:42:07.030
It's gonna show initially some set of
00:42:07.030 --> 00:42:09.140
words like represented in a 3D space.
00:42:09.140 --> 00:42:12.350
They projected the Word2Vec dimensions
00:42:12.350 --> 00:42:13.770
down into 3 axes.
00:42:14.690 --> 00:42:16.990
And then you can add additional words
00:42:16.990 --> 00:42:19.410
in that space and see where they lie
00:42:19.410 --> 00:42:22.250
compared to other words, and you can
00:42:22.250 --> 00:42:24.530
create your own analogies.
00:42:36.930 --> 00:42:38.830
So I was going to take a 2 minute break
00:42:38.830 --> 00:42:40.766
after showing you this, but let's take
00:42:40.766 --> 00:42:42.750
a 2 minute break now, and then I'll
00:42:42.750 --> 00:42:44.220
show it to you and you can try to think
00:42:44.220 --> 00:42:45.710
of analogies that you might want to
00:42:45.710 --> 00:42:46.260
see.
00:42:47.790 --> 00:42:50.020
It took like a couple minutes last time
00:42:50.020 --> 00:42:51.630
I downloaded it on my phone, but.
00:42:52.800 --> 00:42:53.980
Maybe it always takes a couple of
00:42:53.980 --> 00:42:54.430
minutes.
00:42:56.130 --> 00:42:58.110
Can I ask something about
00:42:58.110 --> 00:42:59.950
backpropagation in the.
00:43:01.080 --> 00:43:05.510
In the multilayer perceptron, I want
00:43:05.510 --> 00:43:07.930
to copy some of the functioning of torch.
00:43:07.930 --> 00:43:13.810
So Forward is just putting the vectors
00:43:13.810 --> 00:43:18.480
into the weights and getting
00:43:18.480 --> 00:43:22.790
the result by sequential step so.
00:43:23.730 --> 00:43:26.070
How do we backpropagate
00:43:26.070 --> 00:43:30.270
something from the Result, so the loss,
00:43:30.270 --> 00:43:33.350
and update the weights in the
00:43:33.350 --> 00:43:34.120
previous layers?
00:43:36.040 --> 00:43:36.580
Typically.
00:43:38.790 --> 00:43:40.480
And so the.
00:43:41.080 --> 00:43:46.110
So mathematically it's by the chain
00:43:46.110 --> 00:43:47.510
rule and the partial derivatives.
00:43:49.550 --> 00:43:52.540
Algorithmically, you.
00:43:52.980 --> 00:43:55.300
You compute the gradient for each
00:43:55.300 --> 00:43:57.680
previous layer, and then the gradient
00:43:57.680 --> 00:43:59.995
for the layer below that will be like
00:43:59.995 --> 00:44:01.760
the gradient of the subsequent layer
00:44:01.760 --> 00:44:05.110
times like how much each weight
00:44:05.110 --> 00:44:06.409
influences that gradient.
00:44:08.090 --> 00:44:08.780
00:44:10.890 --> 00:44:13.510
I go ahead and I can't draw it that
00:44:13.510 --> 00:44:14.060
quickly, but.
00:44:15.780 --> 00:44:17.770
Attention 20 like that just means.
00:44:21.840 --> 00:44:22.750
Simple models.
00:44:22.750 --> 00:44:24.030
So let.
00:44:24.030 --> 00:44:26.430
Let's say that we use ReLU.
00:44:26.430 --> 00:44:28.320
So this will.
00:44:28.320 --> 00:44:30.350
This will work as.
00:44:31.720 --> 00:44:32.670
Loss here.
00:44:33.420 --> 00:44:36.310
And how can we update the weight here?
00:44:38.000 --> 00:44:39.070
This process.
00:44:39.480 --> 00:44:40.340
And.
00:44:41.080 --> 00:44:41.530
Kind of.
00:44:41.860 --> 00:44:42.390
Connected.
00:44:44.240 --> 00:44:44.610
90.
00:44:45.310 --> 00:44:47.690
So here you would compute the gradient
00:44:47.690 --> 00:44:52.370
of the error with respect to the output
00:44:52.370 --> 00:44:54.976
here and then you can compute the
00:44:54.976 --> 00:44:56.676
partial derivative of that with respect
00:44:56.676 --> 00:44:57.567
to this weight.
00:44:57.567 --> 00:45:00.540
And then you take like you subtract
00:45:00.540 --> 00:45:02.450
that partial derivative times some step
00:45:02.450 --> 00:45:03.540
size from the weight.
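NOTE
A minimal sketch, not from the lecture, of the update just described in PyTorch terms: autograd applies the chain rule for you, and the last line is "subtract the partial derivative times a step size from the weight". The one-layer ReLU model and the step size are made up for illustration.
import torch
w = torch.randn(3, requires_grad=True)      # weights of one layer
x = torch.tensor([1.0, 2.0, 3.0])
y = torch.relu(w @ x)                       # forward pass through a ReLU
loss = (y - 2.0) ** 2                       # error at the output
loss.backward()                             # chain rule / partial derivatives fill w.grad
with torch.no_grad():
    w -= 0.01 * w.grad                      # gradient-descent update of the weight
    w.grad.zero_()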
00:45:05.410 --> 00:45:07.489
I'm sorry, this usually takes like 20
00:45:07.490 --> 00:45:09.360
or 30 minutes to explain, so it's hard
00:45:09.360 --> 00:45:11.790
to cover it really quickly.
00:45:11.790 --> 00:45:13.896
That's OK alright, I'm going to let
00:45:13.896 --> 00:45:16.220
this keep Downloading I guess, and go
00:45:16.220 --> 00:45:17.603
on and I'll try to keep.
00:45:17.603 --> 00:45:18.737
I'll come back to it.
00:45:18.737 --> 00:45:20.130
I don't know, it's taking even longer
00:45:20.130 --> 00:45:21.555
than it took on my phone, but it did.
00:45:21.555 --> 00:45:23.160
I did test that it works, so hopefully
00:45:23.160 --> 00:45:25.770
it will wait, there's something.
00:45:25.770 --> 00:45:27.370
No, it's still downloading the model.
00:45:27.370 --> 00:45:29.060
Let me just check.
00:45:31.390 --> 00:45:32.140
Relationship.
00:45:35.210 --> 00:45:36.340
That doesn't look good.
00:45:38.470 --> 00:45:40.020
Let's try refreshing.
00:45:42.200 --> 00:45:43.850
No, come on.
00:45:47.890 --> 00:45:48.820
Maybe if I go here.
00:45:49.520 --> 00:45:50.270
12.
00:45:51.890 --> 00:45:53.860
No, that's not what I wanted.
00:45:55.190 --> 00:45:56.470
Let me Experiments.
00:46:06.930 --> 00:46:08.150
No, I used.
00:46:08.150 --> 00:46:10.300
I did use my phone before.
00:46:12.680 --> 00:46:14.810
You can try it if I have one.
00:46:18.790 --> 00:46:21.320
If I cannot get this to work soon, I
00:46:21.320 --> 00:46:21.910
will just.
00:46:23.260 --> 00:46:23.980
Move on.
00:46:50.910 --> 00:46:51.720
That doesn't look good.
00:46:51.720 --> 00:46:54.300
OK, I think I might have to move on.
00:46:54.300 --> 00:46:55.730
Sorry, I'm not sure why it's not
00:46:55.730 --> 00:46:56.340
working.
00:47:00.760 --> 00:47:03.200
But maybe you can get that to work.
00:47:03.200 --> 00:47:05.097
Basically it shows all the points and
00:47:05.097 --> 00:47:07.112
then you can add additional words and
00:47:07.112 --> 00:47:08.337
then you can create your own analogies
00:47:08.337 --> 00:47:10.830
and then it will show like the vectors
00:47:10.830 --> 00:47:13.620
and show how it compares them and which
00:47:13.620 --> 00:47:15.080
word is most similar in your
00:47:15.080 --> 00:47:15.690
dictionary.
00:47:17.090 --> 00:47:20.926
So the so the really amazing thing
00:47:20.926 --> 00:47:23.642
about this to me is that it's just like
00:47:23.642 --> 00:47:25.200
the idea about thinking of words in a
00:47:25.200 --> 00:47:27.352
continuous space instead of thinking of
00:47:27.352 --> 00:47:28.860
words as like a discrete thing.
00:47:28.860 --> 00:47:31.140
And the idea that you can like add and
00:47:31.140 --> 00:47:33.000
subtract and perform mathematical
00:47:33.000 --> 00:47:34.950
operations on words and then it makes
00:47:34.950 --> 00:47:37.140
sense like that the corresponding, it
00:47:37.140 --> 00:47:39.050
performs analogies that way like pretty
00:47:39.050 --> 00:47:39.770
accurately.
00:47:39.770 --> 00:47:42.650
That was all like kind of crazy.
00:47:42.650 --> 00:47:46.420
And so this Word2Vec representation or.
00:47:46.470 --> 00:47:48.980
And similar kinds of Word embeddings
00:47:48.980 --> 00:47:51.136
represent that language is really in a
00:47:51.136 --> 00:47:52.010
continuous space.
00:47:52.010 --> 00:47:53.955
Words don't have a discrete meaning,
00:47:53.955 --> 00:47:55.500
they actually have like a lot of
00:47:55.500 --> 00:47:56.200
different meanings.
00:47:56.200 --> 00:47:58.260
And they have like some similarity to
00:47:58.260 --> 00:48:02.125
other words and differences in that
00:48:02.125 --> 00:48:03.820
words can be combined to mean new
00:48:03.820 --> 00:48:04.490
things.
00:48:04.490 --> 00:48:06.490
And so all of this is represented
00:48:06.490 --> 00:48:09.245
mathematically just by mapping the
00:48:09.245 --> 00:48:10.920
words into these big continuous
00:48:10.920 --> 00:48:11.790
vectors.
00:48:11.790 --> 00:48:15.290
So in such a way that you can either
00:48:15.290 --> 00:48:17.150
predict words by averaging.
00:48:17.210 --> 00:48:18.710
The surrounding words, or you can
00:48:18.710 --> 00:48:20.550
predict them through linear models.
00:48:26.460 --> 00:48:27.990
So.
00:48:28.110 --> 00:48:33.065
So it's like they trained the model on
00:48:33.065 --> 00:48:34.660
783,000,000 words with the other
00:48:34.660 --> 00:48:35.050
dimension.
00:48:39.360 --> 00:48:40.910
Like where these books that they were
00:48:40.910 --> 00:48:41.800
training on or?
00:48:43.020 --> 00:48:46.569
It may be I forget now, but it may have
00:48:46.569 --> 00:48:47.864
been the books corpus.
00:48:47.864 --> 00:48:49.830
The books corpus is one thing that's
00:48:49.830 --> 00:48:52.584
commonly used, which is just a bunch of
00:48:52.584 --> 00:48:54.980
books, but some there's a lot of
00:48:54.980 --> 00:48:56.922
there's a bunch of big repositories of
00:48:56.922 --> 00:48:59.270
data, like the Wall Street Journal
00:48:59.270 --> 00:49:01.410
books, Wikipedia.
00:49:01.410 --> 00:49:04.213
So there's like a lot of data sets that
00:49:04.213 --> 00:49:06.920
have been created and like packaged up
00:49:06.920 --> 00:49:08.189
nicely for this kind of thing.
00:49:11.520 --> 00:49:12.140
Question.
00:49:13.530 --> 00:49:14.680
You got it open.
00:49:14.680 --> 00:49:15.600
OK, cool.
00:49:17.380 --> 00:49:21.070
Do you are you able to connect to HDMI?
00:49:34.070 --> 00:49:34.640
All right.
00:49:39.100 --> 00:49:40.080
Alright, thanks.
00:49:40.080 --> 00:49:43.560
So yeah, so you can see that it's
00:49:43.560 --> 00:49:46.400
Representing like it's a bunch of
00:49:46.400 --> 00:49:49.980
mainly mother, wife, husband, daughter,
00:49:49.980 --> 00:49:51.320
Princess, so.
00:49:52.030 --> 00:49:53.360
Different genders.
00:49:53.360 --> 00:49:56.070
They're plotting it on gender, age and
00:49:56.070 --> 00:49:56.700
residual.
00:49:56.700 --> 00:49:57.610
So a difference.
00:49:58.910 --> 00:50:01.030
Another third Vector that everything
00:50:01.030 --> 00:50:03.925
else projects into, and then things
00:50:03.925 --> 00:50:06.510
like chair and computer which are just
00:50:06.510 --> 00:50:07.340
purely residual.
00:50:08.290 --> 00:50:10.180
And then you can see the actual Vector
00:50:10.180 --> 00:50:11.800
Representations here.
00:50:11.800 --> 00:50:14.290
So it's like 300 dimensional Word2Vec
00:50:14.290 --> 00:50:14.840
vectors.
00:50:15.690 --> 00:50:19.150
And then you can add words.
00:50:19.150 --> 00:50:22.060
So for example if I do.
00:50:23.030 --> 00:50:25.260
I can add dog.
00:50:26.830 --> 00:50:28.010
And puppy.
00:50:31.940 --> 00:50:33.280
And then if I.
00:50:34.170 --> 00:50:35.160
Do.
00:50:37.460 --> 00:50:38.420
Scroll down a little bit.
00:50:45.540 --> 00:50:48.240
So if I do for example.
00:50:49.550 --> 00:50:50.410
So this where I type.
00:50:50.410 --> 00:50:50.880
Yeah.
00:50:50.880 --> 00:50:53.840
So if I say man is to.
00:50:56.220 --> 00:50:57.160
Boy.
00:50:58.200 --> 00:51:00.340
As dog is to.
00:51:03.330 --> 00:51:05.690
And I think I understand where I
00:51:05.690 --> 00:51:07.430
pressed submit.
00:51:07.430 --> 00:51:09.040
Come on, don't be such a pain.
00:51:09.040 --> 00:51:10.740
There it goes.
00:51:10.740 --> 00:51:11.456
Just took awhile.
00:51:11.456 --> 00:51:13.550
So then so I said, man is the boy as
00:51:13.550 --> 00:51:14.260
dog is to.
00:51:14.260 --> 00:51:16.510
And then it comes out with puppy and
00:51:16.510 --> 00:51:17.975
you can see the vectors here.
00:51:17.975 --> 00:51:18.350
So.
00:51:19.190 --> 00:51:22.720
Man, the Vector of man to boy is being
00:51:22.720 --> 00:51:23.870
added to dog.
00:51:24.740 --> 00:51:28.570
And then that comes out OK.
00:51:29.370 --> 00:51:32.130
That comes out pretty close to puppy
00:51:32.130 --> 00:51:32.720
this.
00:51:34.440 --> 00:51:36.530
This site seems to be like having some
00:51:36.530 --> 00:51:37.350
problems today.
00:51:37.350 --> 00:51:38.650
It's just kind of slow.
00:51:40.210 --> 00:51:40.580
Data.
00:51:42.520 --> 00:51:44.440
So does anyone else have one to try
00:51:44.440 --> 00:51:45.659
that you'd like me to try?
00:51:46.700 --> 00:51:49.590
This is say, there we go.
00:51:49.590 --> 00:51:53.170
It's just like very having problems.
00:51:55.360 --> 00:51:56.850
I can try one more though, if somebody
00:51:56.850 --> 00:51:57.850
has one.
00:51:57.850 --> 00:51:59.475
Does anyone else have a set of words in
00:51:59.475 --> 00:52:00.310
an analogy?
00:52:02.840 --> 00:52:03.290
OK.
00:52:05.610 --> 00:52:06.180
Kanji.
00:52:06.590 --> 00:52:07.660
Let's see if she can.
00:52:10.210 --> 00:52:11.820
Others in but what's yours?
00:52:11.820 --> 00:52:12.145
Said?
00:52:12.145 --> 00:52:12.740
What's your?
00:52:15.810 --> 00:52:17.290
Cockroaches to Mexican.
00:52:21.740 --> 00:52:22.290
OK.
00:52:23.830 --> 00:52:26.270
So I'll add tacos.
00:52:27.500 --> 00:52:28.910
Mexican.
00:52:33.590 --> 00:52:40.840
Pizza and Italian.
00:52:43.730 --> 00:52:45.540
Tell me.
00:52:50.000 --> 00:52:51.190
Right, so tacos.
00:52:51.330 --> 00:52:51.680
I.
00:52:52.830 --> 00:52:55.440
Tacos is to pizza.
00:52:55.440 --> 00:52:58.630
And I think like when they do this test
00:52:58.630 --> 00:53:00.177
they have like they're not doing it out
00:53:00.177 --> 00:53:02.020
of all 30,000 words or whatever.
00:53:02.020 --> 00:53:03.880
I think they're doing it out of some
00:53:03.880 --> 00:53:05.050
Sub candidates.
00:53:05.050 --> 00:53:06.600
So kind of like what we're doing here.
00:53:07.470 --> 00:53:10.630
As tacos is the pizza as what?
00:53:14.280 --> 00:53:15.830
Tacos is to Mexican.
00:53:19.930 --> 00:53:20.360
Pizza.
00:53:23.370 --> 00:53:24.460
It should work.
00:53:24.460 --> 00:53:26.309
It's just addition, so it should
00:53:26.310 --> 00:53:26.630
be.
00:53:27.450 --> 00:53:29.600
It should be recomposable.
00:53:30.730 --> 00:53:31.670
Alright.
00:53:31.670 --> 00:53:33.480
And then I never understand, like which
00:53:33.480 --> 00:53:36.090
one they're asking me to press them on.
00:53:39.490 --> 00:53:43.820
I'm supposed to do what is it, pizza.
00:53:46.240 --> 00:53:47.450
OK, I'll try it.
00:53:55.780 --> 00:53:57.320
Alright, let me try to fix that.
00:53:58.330 --> 00:53:58.970
00:54:00.730 --> 00:54:02.120
This Demo is killing me.
00:54:06.180 --> 00:54:07.300
There we go.
00:54:07.300 --> 00:54:08.270
All right, pizza.
00:54:17.280 --> 00:54:18.870
I think it's just processing.
00:54:20.970 --> 00:54:21.700
OK, there we go.
00:54:21.700 --> 00:54:22.880
Pizza is 2 Italian.
00:54:25.650 --> 00:54:28.230
I want to I'm going to have to go with
00:54:28.230 --> 00:54:30.310
other things, but do do feel free to
00:54:30.310 --> 00:54:30.940
try.
00:54:36.310 --> 00:54:37.200
Alright, thank you.
00:54:43.600 --> 00:54:44.770
What was your analogy?
00:54:44.770 --> 00:54:47.110
Chicken is to basketball as what?
00:54:49.690 --> 00:54:51.130
It just like makes something up and see
00:54:51.130 --> 00:54:52.890
what it comes up with, yeah.
00:55:08.580 --> 00:55:10.890
So now we're going to talk about
00:55:10.890 --> 00:55:11.560
Attention.
00:55:12.310 --> 00:55:16.395
And so far we've talked about linear
00:55:16.395 --> 00:55:18.260
linear processing.
00:55:18.260 --> 00:55:20.180
You just take some set of features and
00:55:20.180 --> 00:55:22.220
you multiply it by weights and sum up
00:55:22.220 --> 00:55:24.850
the sum of the product.
00:55:25.710 --> 00:55:28.520
We talked about Convolution, which is
00:55:28.520 --> 00:55:30.421
basically just when you apply a linear
00:55:30.421 --> 00:55:31.440
operator over Windows.
00:55:31.440 --> 00:55:33.340
So you can even do this in text as
00:55:33.340 --> 00:55:33.650
well.
00:55:33.650 --> 00:55:36.250
But for images you Apply within like
00:55:36.250 --> 00:55:39.009
little pixel patches, you apply the
00:55:39.010 --> 00:55:41.290
same linear operator to each patch and
00:55:41.290 --> 00:55:42.700
return the result, and then you get
00:55:42.700 --> 00:55:44.600
back like a new map of features.
00:55:45.940 --> 00:55:47.570
So now we're going to introduce a brand
00:55:47.570 --> 00:55:51.860
new type of kind of processing, which
00:55:51.860 --> 00:55:54.086
is called Attention and the basic idea
00:55:54.086 --> 00:55:54.759
of Attention.
00:55:55.660 --> 00:55:58.310
Is that you're given a set of key value
00:55:58.310 --> 00:56:01.650
pairs, and I'll explain what that means
00:56:01.650 --> 00:56:04.150
in the next slide and a query, and then
00:56:04.150 --> 00:56:07.145
the output of the Attention model or of
00:56:07.145 --> 00:56:09.400
the Attention function is a sum of the
00:56:09.400 --> 00:56:11.260
values weighted by the key query
00:56:11.260 --> 00:56:11.960
similarity.
00:56:14.930 --> 00:56:15.960
So.
00:56:17.530 --> 00:56:21.920
The in Cross Attention you have like a
00:56:21.920 --> 00:56:23.060
key value pair.
00:56:23.060 --> 00:56:25.540
So the where the key is used for
00:56:25.540 --> 00:56:27.109
matching and the value is used to
00:56:27.110 --> 00:56:27.840
output.
00:56:27.840 --> 00:56:30.109
So one example is that the key could be
00:56:30.110 --> 00:56:31.780
your features and the value could be
00:56:31.780 --> 00:56:33.170
the thing that you want to predict.
00:56:34.130 --> 00:56:36.140
And then you have some query which is
00:56:36.140 --> 00:56:37.950
something that you want to compute a
00:56:37.950 --> 00:56:38.750
value for.
00:56:39.620 --> 00:56:42.052
And you use the query, you match the
00:56:42.052 --> 00:56:44.920
query to the keys, and then you sum the
00:56:44.920 --> 00:56:47.800
values based on those similarities to
00:56:47.800 --> 00:56:50.700
get your value for the query.
00:56:50.700 --> 00:56:52.410
So mathematically, it's kind of like
00:56:52.410 --> 00:56:54.010
simpler mathematically than it is
00:56:54.010 --> 00:56:54.520
verbally.
00:56:55.260 --> 00:56:58.140
So mathematically you have some
00:56:58.140 --> 00:56:59.810
similarity function that says how
00:56:59.810 --> 00:57:02.150
similar some key is to some query.
00:57:02.150 --> 00:57:03.820
So this could be like a dot Product for
00:57:03.820 --> 00:57:04.390
example.
00:57:05.900 --> 00:57:08.800
Or you could use distance, or 1 divided by
00:57:08.800 --> 00:57:10.760
Euclidean distance and then you have
00:57:10.760 --> 00:57:14.915
your values and you take this sum over
00:57:14.915 --> 00:57:17.580
the similarity of each key times the
00:57:17.580 --> 00:57:19.920
query and multiply it by the value.
00:57:20.810 --> 00:57:22.570
And then you normalize it or divide it
00:57:22.570 --> 00:57:24.530
by the sum of all those similarities,
00:57:24.530 --> 00:57:26.640
which is like equivalent to making
00:57:26.640 --> 00:57:28.320
these similarities sum to one.
00:57:29.130 --> 00:57:33.950
So the output value for Q will just be
00:57:33.950 --> 00:57:37.240
a weighted average of the input values,
00:57:37.240 --> 00:57:39.315
where the weights are proportional to
00:57:39.315 --> 00:57:40.150
the similarity.
00:57:44.090 --> 00:57:46.030
So let's see it for some simple
00:57:46.030 --> 00:57:46.960
examples.
00:57:48.140 --> 00:57:49.830
So let's say that our similarity
00:57:49.830 --> 00:57:52.900
function is just 1 / (K - Q)^2.
00:57:54.890 --> 00:57:57.280
And I've got these key value pairs.
00:57:57.280 --> 00:57:59.976
So here maybe this is a label, like 1
00:57:59.976 --> 00:58:00.907
or -1.
00:58:00.907 --> 00:58:03.991
And I've got one Feature here: 1, 7, 5.
00:58:03.991 --> 00:58:07.487
So this is like one data element, this
00:58:07.487 --> 00:58:09.836
is the key and this is the value.
00:58:09.836 --> 00:58:11.692
And then this is another key and its
00:58:11.692 --> 00:58:13.339
value and another key and its value.
00:58:14.580 --> 00:58:17.550
And then I've got some query which is
00:58:17.550 --> 00:58:18.350
4.
00:58:19.330 --> 00:58:22.560
So I'm going to compare query to each
00:58:22.560 --> 00:58:26.736
of these keys, 1, 7, 5, so the distance K -
00:58:26.736 --> 00:58:30.443
Q for the first one is 3, so 1 / (K -
00:58:30.443 --> 00:58:32.600
Q)^2 is 1 / 3^2.
00:58:33.370 --> 00:58:35.190
Then I multiply it by the value.
00:58:36.390 --> 00:58:38.810
Then I have (7 - 4)^2.
00:58:40.360 --> 00:58:43.400
Multiply that by the value, -1, and
00:58:43.400 --> 00:58:45.680
then (5 - 4)^2, 1
00:58:45.680 --> 00:58:48.720
over that, multiply it by the value
00:58:48.720 --> 00:58:51.730
-1, and then I divide by the sum of these
00:58:51.730 --> 00:58:54.891
similarities, 1/3^2 + 1/3^2 +
00:58:54.891 --> 00:58:59.019
1/1^2, and then the output is -0.818.
00:58:59.020 --> 00:59:01.536
So this query was closer to the
00:59:01.536 --> 00:59:02.420
negative numbers.
00:59:03.750 --> 00:59:05.260
Or at least closer to this
00:59:05.260 --> 00:59:08.050
negative number, the five, than
00:59:08.050 --> 00:59:10.260
it was to the positive number.
00:59:11.090 --> 00:59:13.670
And so the output is negative,
00:59:13.670 --> 00:59:15.440
corresponding to the value here.
00:59:15.440 --> 00:59:17.760
So these two end up canceling out
00:59:17.760 --> 00:59:19.514
because they're equally far away, and
00:59:19.514 --> 00:59:21.261
one has a value of 1 and one has a
00:59:21.261 --> 00:59:21.959
value of -1.
00:59:22.570 --> 00:59:26.065
And then this one has more influence.
00:59:26.065 --> 00:59:27.750
They sort of cancel, they cancel out
00:59:27.750 --> 00:59:29.620
in the numerator, but they still get
00:59:29.620 --> 00:59:31.190
weight, so they still appear in the
00:59:31.190 --> 00:59:31.720
denominator.
00:59:34.550 --> 00:59:37.330
As another example, if my input is 0,
00:59:37.330 --> 00:59:40.290
then the similarity to this is 1 / 1^2.
00:59:40.290 --> 00:59:41.090
For this it's.
00:59:42.440 --> 00:59:43.610
OK, did that wrong.
00:59:43.610 --> 00:59:47.109
It should be 1 / 7^2, and for this it
00:59:47.109 --> 00:59:47.523
should be.
00:59:47.523 --> 00:59:49.210
For some reason I changed it to a 1 when
00:59:49.210 --> 00:59:50.793
I was calculating here, but this should
00:59:50.793 --> 00:59:51.800
be 1 / 5^2.
00:59:52.510 --> 00:59:55.200
And then I compute the similarity
00:59:55.200 --> 00:59:58.200
to each of these, so it's
00:59:58.200 --> 00:59:59.637
1, 1/49, and 1/25.
00:59:59.637 --> 01:00:02.125
And then the values are 1, -1,
01:00:02.125 --> 01:00:02.567
-1.
01:00:02.567 --> 01:00:04.405
So I made a mistake here when I did it
01:00:04.405 --> 01:00:06.980
by hand, but you get the idea I hope.
01:00:06.980 --> 01:00:08.370
And then I divide by the sum of
01:00:08.370 --> 01:00:11.055
similarities and then the output is
01:00:11.055 --> 01:00:12.130
0.834.
01:00:13.100 --> 01:00:14.630
Which makes sense, because it's closer
01:00:14.630 --> 01:00:16.160
to this one than it is to the negative
01:00:16.160 --> 01:00:16.500
ones.
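NOTE
A minimal sketch, not from the lecture, reproducing the example above with sim(k, q) = 1 / (k - q)^2, keys 1, 7, 5 and values 1, -1, -1. With these similarities the query 4 gives about -0.818; the query 0 comes out near 0.89 rather than the slide's 0.834, because of the hand-calculation slip mentioned above.
def cross_attention(q, keys, values, sim=lambda k, q: 1.0 / (k - q) ** 2):
    weights = [sim(k, q) for k in keys]                                 # key-query similarities
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)   # weighted average of values
keys, values = [1.0, 7.0, 5.0], [1.0, -1.0, -1.0]
print(cross_attention(4.0, keys, values))   # ~ -0.818
print(cross_attention(0.0, keys, values))   # ~  0.886 with the corrected similarities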
01:00:23.780 --> 01:00:24.890
If the query.
01:00:24.890 --> 01:00:28.020
So my similarity function was not the
01:00:28.020 --> 01:00:29.660
best similarity function for that
01:00:29.660 --> 01:00:32.510
reason, so I changed it, and when I do
01:00:32.510 --> 01:00:34.610
Self Attention I change the similarity
01:00:34.610 --> 01:00:36.380
function to plus one on the bottom so I
01:00:36.380 --> 01:00:37.690
don't have to divide by zero.
01:00:37.690 --> 01:00:38.570
But yeah.
01:00:39.470 --> 01:00:40.810
And here I'm just using.
01:00:40.810 --> 01:00:42.840
You'll see in practice a different
01:00:42.840 --> 01:00:44.450
similarity function is usually used,
01:00:44.450 --> 01:00:46.110
but it's hard to like manually compute,
01:00:46.110 --> 01:00:48.130
so I am using a very simple one here.
01:00:49.510 --> 01:00:52.160
So this is Cross Attention: basically,
01:00:52.160 --> 01:00:55.280
that to get the.
01:00:55.360 --> 01:00:57.680
To get the value of some query, you
01:00:57.680 --> 01:01:00.159
compute the similarity of the query to
01:01:00.160 --> 01:01:02.170
each of the keys, and then you take a
01:01:02.170 --> 01:01:04.065
weighted average of the values that's
01:01:04.065 --> 01:01:05.870
weighted by the key query similarity.
01:01:07.360 --> 01:01:11.240
Self Attention is that the key is equal
01:01:11.240 --> 01:01:14.060
to the value and each key is also a
01:01:14.060 --> 01:01:14.710
query.
01:01:14.710 --> 01:01:16.920
So in other words, you just have like a
01:01:16.920 --> 01:01:19.557
group, you just have like a bunch of
01:01:19.557 --> 01:01:21.912
values and you match those values to
01:01:21.912 --> 01:01:23.430
each other and you take a weighted
01:01:23.430 --> 01:01:24.840
average of those values according to
01:01:24.840 --> 01:01:25.740
how similar they are.
01:01:26.630 --> 01:01:28.110
And as you'll see, it's like a kind of
01:01:28.110 --> 01:01:28.890
clustering.
01:01:29.920 --> 01:01:31.620
Here my Input
01:01:31.620 --> 01:01:35.450
can just be 3 numbers, and each of these
01:01:35.450 --> 01:01:38.190
I will treat as a key and value pair,
01:01:38.190 --> 01:01:42.150
so I'll have like (1, 1), (7, 7), (5, 5), and I also
01:01:42.150 --> 01:01:45.047
have three queries which are like 1, 7,
01:01:45.047 --> 01:01:45.876
and five.
01:01:45.876 --> 01:01:48.810
So here I did the computation out for
01:01:48.810 --> 01:01:51.270
the query one so I get one.
01:01:51.380 --> 01:01:59.662
So I get 1/(0^2 + 1) * 1 + 1/(6^2 + 1) * 7 +
01:01:59.662 --> 01:02:03.450
1/(4^2 + 1) * 5.
01:02:04.140 --> 01:02:06.210
And then divide by the similarities.
01:02:06.800 --> 01:02:12.174
And I get 1.37, and then if I
01:02:12.174 --> 01:02:14.616
do it for seven I get 6.54 and if I do
01:02:14.616 --> 01:02:15.060
it for.
01:02:16.470 --> 01:02:18.180
Five, I get 5.13.
01:02:19.510 --> 01:02:21.120
And I can apply iteratively.
01:02:21.120 --> 01:02:23.480
So if I apply it again the same exact
01:02:23.480 --> 01:02:26.132
operation but now on these values of
01:02:26.132 --> 01:02:28.210
1.37, 6.54, and 5.13.
01:02:28.850 --> 01:02:30.900
Then I get this, and then I do it again
01:02:30.900 --> 01:02:33.420
and you can see that it's like quickly
01:02:33.420 --> 01:02:35.330
bringing the seven and the five close
01:02:35.330 --> 01:02:35.760
together.
01:02:35.760 --> 01:02:38.387
And it's also in the case of the
01:02:38.387 --> 01:02:39.642
similarity function, like bringing
01:02:39.642 --> 01:02:40.270
everything together.
01:02:40.270 --> 01:02:42.660
So it's kind of doing a clustering, but
01:02:42.660 --> 01:02:44.620
where it brings it depends on my
01:02:44.620 --> 01:02:46.360
similarity function, but it brings like
01:02:46.360 --> 01:02:49.106
very similar things very close
01:02:49.106 --> 01:02:49.788
together.
01:02:49.788 --> 01:02:52.340
And over time if I do enough of it will
01:02:52.340 --> 01:02:53.980
bring like everything close together.
01:02:55.170 --> 01:02:56.770
So here's another example where my
01:02:56.770 --> 01:03:01.329
input is 1, 9, 8, 2, and here's after one
01:03:01.330 --> 01:03:04.118
iteration, 2 iterations, 3 iterations,
01:03:04.118 --> 01:03:05.152
4 iterations.
01:03:05.152 --> 01:03:07.395
So you can see that after just two
01:03:07.395 --> 01:03:09.000
iterations, it's essentially brought
01:03:09.000 --> 01:03:11.189
the nine and the eight to be the same
01:03:11.189 --> 01:03:13.623
value and the one and the two to be the
01:03:13.623 --> 01:03:14.109
same value.
01:03:15.000 --> 01:03:18.840
And then if I keep doing it, it'll
01:03:18.840 --> 01:03:20.220
like bring them closer?
01:03:20.220 --> 01:03:21.820
Yeah, eventually they'll all be the
01:03:21.820 --> 01:03:22.070
same.
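NOTE
A minimal sketch, not from the lecture, of the Self Attention example above, where each number is key, value, and query at once and sim(k, q) = 1 / ((k - q)^2 + 1); one pass over 1, 7, 5 gives roughly 1.37, 6.54, 5.13 as on the slide, and repeating it pulls the values together.
def self_attention_step(xs, sim=lambda k, q: 1.0 / ((k - q) ** 2 + 1.0)):
    out = []
    for q in xs:                                  # every element acts as a query...
        w = [sim(k, q) for k in xs]               # ...matched against every element as a key
        out.append(sum(wi * v for wi, v in zip(w, xs)) / sum(w))
    return out
xs = [1.0, 7.0, 5.0]
for _ in range(3):
    xs = self_attention_step(xs)
    print([round(x, 2) for x in xs])              # first pass: [1.37, 6.54, 5.13]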
01:03:23.860 --> 01:03:25.730
But if I had other kinds of similarity
01:03:25.730 --> 01:03:28.015
functions, an exponential function, it
01:03:28.015 --> 01:03:30.390
would take a lot longer to bring them
01:03:30.390 --> 01:03:32.480
all together, because it would be a
01:03:32.480 --> 01:03:34.050
lot more Peaky, and in fact that's
01:03:34.050 --> 01:03:35.020
what's used in practice.
01:03:35.770 --> 01:03:37.394
So you can think about this.
01:03:37.394 --> 01:03:39.289
You can think about this Attention as
01:03:39.290 --> 01:03:40.580
doing like two different things.
01:03:40.580 --> 01:03:43.140
If you apply as Cross Attention, then
01:03:43.140 --> 01:03:44.610
you're basically transferring the
01:03:44.610 --> 01:03:47.080
associations of 1 set of data elements
01:03:47.080 --> 01:03:49.860
to a new data element.
01:03:49.860 --> 01:03:53.686
So you had the association of 1 maps to
01:03:53.686 --> 01:03:57.645
1, 7 maps to -1, 5 maps to -1, and
01:03:57.645 --> 01:04:00.780
so my expected value of four is some
01:04:00.780 --> 01:04:03.260
weighted average of these
01:04:03.260 --> 01:04:06.100
values, and it's about -0.8, or my expected
01:04:06.100 --> 01:04:07.180
value of 0
01:04:07.440 --> 01:04:09.020
is a weighted average of these by
01:04:09.020 --> 01:04:12.190
similarity, and it's positive, about 0.8.
01:04:13.700 --> 01:04:15.270
So it's a kind of like near weighted
01:04:15.270 --> 01:04:16.600
nearest neighbor essentially?
01:04:18.330 --> 01:04:20.540
Or in the case of Self Attention, it's
01:04:20.540 --> 01:04:22.740
a kind of like clustering where you're
01:04:22.740 --> 01:04:26.255
grouping together similar elements and
01:04:26.255 --> 01:04:29.010
like aggregating information across
01:04:29.010 --> 01:04:30.040
these tokens.
01:04:33.480 --> 01:04:35.970
So Cross Attention is an instance based
01:04:35.970 --> 01:04:36.750
regression.
01:04:36.750 --> 01:04:38.750
You're computing, you're averaging a value
01:04:38.750 --> 01:04:41.330
based on the nearby other nearby.
01:04:41.460 --> 01:04:41.890
I.
01:04:43.750 --> 01:04:44.240
Keys.
01:04:45.340 --> 01:04:48.080
And Self Attention is a soft cluster
01:04:48.080 --> 01:04:50.570
aggregator and it's important to note
01:04:50.570 --> 01:04:52.620
that in this case, like for simplicity,
01:04:52.620 --> 01:04:54.890
I'm just saying that their values are
01:04:54.890 --> 01:04:55.630
scalars.
01:04:56.480 --> 01:04:58.940
And so it looks like the value it's
01:04:58.940 --> 01:05:00.880
just like replacing it and everything
01:05:00.880 --> 01:05:02.400
will eventually merge to one.
01:05:02.400 --> 01:05:04.790
But in practice you're applying this to
01:05:04.790 --> 01:05:06.510
large vectors, large continuous
01:05:06.510 --> 01:05:09.410
vectors, and so the distances can be
01:05:09.410 --> 01:05:10.370
much bigger.
01:05:10.370 --> 01:05:13.830
And the and you can when you add
01:05:13.830 --> 01:05:15.907
multiple dimensional multidimensional
01:05:15.907 --> 01:05:18.590
vectors you can overlay information.
01:05:18.590 --> 01:05:22.230
So similar to how you could
01:05:22.230 --> 01:05:25.210
have like an audio stream where
01:05:25.210 --> 01:05:26.520
you've got music playing in the
01:05:26.520 --> 01:05:28.390
background and two people are talking
01:05:28.390 --> 01:05:28.490
at
01:05:28.540 --> 01:05:31.280
once, and you can separate that into
01:05:31.280 --> 01:05:33.140
each person talking in the audio
01:05:33.140 --> 01:05:33.508
stream.
01:05:33.508 --> 01:05:36.216
All of those signals are overlaid on
01:05:36.216 --> 01:05:38.070
each other, but the signals are all
01:05:38.070 --> 01:05:38.590
still there.
01:05:38.590 --> 01:05:40.148
They don't completely interfere with
01:05:40.148 --> 01:05:40.877
each other.
01:05:40.877 --> 01:05:43.314
And in the same way, when you have high
01:05:43.314 --> 01:05:45.410
dimensional vectors, when you're
01:05:45.410 --> 01:05:47.620
averaging those vectors, like with this
01:05:47.620 --> 01:05:49.710
operation, you're not necessarily
01:05:49.710 --> 01:05:51.629
replacing information, you're actually
01:05:51.630 --> 01:05:52.790
adding information.
01:05:52.790 --> 01:05:54.140
So you end up with some high
01:05:54.140 --> 01:05:55.720
dimensional vector that actually
01:05:55.720 --> 01:05:57.698
contains the information in each of
01:05:57.698 --> 01:05:58.009
those.
01:05:58.240 --> 01:05:59.840
Each of those vectors that you were
01:05:59.840 --> 01:06:00.810
adding into it.
01:06:02.880 --> 01:06:06.370
And so it's not just a pure clustering
01:06:06.370 --> 01:06:08.445
where you're like simplifying, it's
01:06:08.445 --> 01:06:10.345
Adding you're adding information,
01:06:10.345 --> 01:06:13.210
you're aggregating information across
01:06:13.210 --> 01:06:14.470
your different tokens.
01:06:16.500 --> 01:06:19.120
So this becomes extremely powerful as
01:06:19.120 --> 01:06:20.440
represented by trogdor.
01:06:21.280 --> 01:06:24.140
And in general, when you combine it
01:06:24.140 --> 01:06:26.340
with learned similarity functions and
01:06:26.340 --> 01:06:28.570
nonlinear Feature transformations.
01:06:30.240 --> 01:06:32.260
So this finally brings us to the
01:06:32.260 --> 01:06:33.270
transformer.
01:06:34.500 --> 01:06:37.430
And the transformer is just an
01:06:37.430 --> 01:06:40.090
application of this Attention idea
01:06:40.090 --> 01:06:43.520
where you define the similarity is.
01:06:45.440 --> 01:06:46.580
Really wonder what is like.
01:06:46.580 --> 01:06:48.240
Always screwing up there where you
01:06:48.240 --> 01:06:50.460
define the similarity as e to the
01:06:50.460 --> 01:06:51.075
dot Product.
01:06:51.075 --> 01:06:53.600
So it's basically a softmax operation
01:06:53.600 --> 01:06:55.000
to get your weighted similarities.
01:06:55.000 --> 01:06:57.130
So when you do the softmax, you divide
01:06:57.130 --> 01:06:58.670
by the sum of these similarities to
01:06:58.670 --> 01:07:00.020
make it sum to one.
01:07:00.980 --> 01:07:04.700
So it's E to the key dot query: E to the key
01:07:04.700 --> 01:07:06.250
dot query is your similarity.
01:07:06.890 --> 01:07:08.260
And then they also have some
01:07:08.260 --> 01:07:10.410
normalization by the dimensionality of
01:07:10.410 --> 01:07:11.980
the keys, because otherwise, like if
01:07:11.980 --> 01:07:13.257
you have really long vectors, then
01:07:13.257 --> 01:07:14.600
you'll always tend to be like pretty
01:07:14.600 --> 01:07:15.230
far away.
01:07:15.230 --> 01:07:17.190
And so this normalizes it so that for
01:07:17.190 --> 01:07:18.700
different length vectors you'll still
01:07:18.700 --> 01:07:20.594
have like a unit Norm kind of
01:07:20.594 --> 01:07:22.229
similarity, a unit length similarity
01:07:22.230 --> 01:07:22.630
typically.
01:07:24.730 --> 01:07:26.720
And then you multiply it by the value.
01:07:26.720 --> 01:07:28.590
So here it's represented in matrix
01:07:28.590 --> 01:07:30.252
operation, so you have an outer product
01:07:30.252 --> 01:07:32.170
of all your queries times all your
01:07:32.170 --> 01:07:32.650
keys.
01:07:32.650 --> 01:07:34.230
So that gives you a matrix of the
01:07:34.230 --> 01:07:36.380
similarity of each query to each key.
01:07:37.340 --> 01:07:38.840
And you take a softmax.
01:07:39.420 --> 01:07:42.210
So then you are normalizing, you're
01:07:42.210 --> 01:07:42.970
computing those.
01:07:44.060 --> 01:07:46.420
The similarity score for each key in
01:07:46.420 --> 01:07:48.100
query, so this will still be a matrix.
01:07:48.790 --> 01:07:51.400
You multiply it by your values and now
01:07:51.400 --> 01:07:55.755
you have a new value Vector for each of
01:07:55.755 --> 01:07:56.780
your queries.
01:07:58.130 --> 01:07:59.740
So it's just doing the same thing, but
01:07:59.740 --> 01:08:01.800
with matrix operations for efficiency.
01:08:01.800 --> 01:08:04.780
And this is like very great for GPUs.
01:08:04.780 --> 01:08:08.400
GPUs can do this super fast and tpus
01:08:08.400 --> 01:08:09.430
can do it even faster.
01:08:11.040 --> 01:08:12.630
Those are tensor processing units.
01:08:14.110 --> 01:08:16.000
And then this is just that represented
01:08:16.000 --> 01:08:16.720
as a diagram.
01:08:16.720 --> 01:08:19.160
So key and query comes in, you get
01:08:19.160 --> 01:08:21.410
matrix multiplied Scaled by this thing.
01:08:22.070 --> 01:08:24.350
Then softmax and then another matrix
01:08:24.350 --> 01:08:25.000
multiply.
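NOTE
A minimal sketch, not from the lecture, of the matrix form just described, softmax(Q K^T / sqrt(d_k)) V, in NumPy; the shapes are illustrative.
import numpy as np
def scaled_dot_product_attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[-1])            # every query dotted with every key, scaled
    scores = scores - scores.max(axis=-1, keepdims=True)   # for numerical stability
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to one
    return weights @ V                                  # weighted average of the value vectors
Q, K, V = np.random.randn(4, 64), np.random.randn(6, 64), np.random.randn(6, 64)
out = scaled_dot_product_attention(Q, K, V)             # one 64-d output per query: shape (4, 64)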
01:08:26.840 --> 01:08:29.800
And you can learn the similarity
01:08:29.800 --> 01:08:32.207
function with a linear layer, and you
01:08:32.207 --> 01:08:34.220
can even learn multiple similarity
01:08:34.220 --> 01:08:35.110
functions.
01:08:35.110 --> 01:08:36.600
So first, let's say we're doing a
01:08:36.600 --> 01:08:38.210
single head transformer.
01:08:38.210 --> 01:08:41.755
So that means that basically we pass in
01:08:41.755 --> 01:08:43.480
our value, our key in our query.
01:08:44.300 --> 01:08:46.480
We pass them through some linear layer
01:08:46.480 --> 01:08:48.310
that transforms them into a new
01:08:48.310 --> 01:08:48.800
Embedding.
01:08:49.530 --> 01:08:51.420
And then we take the dot Product, do
01:08:51.420 --> 01:08:53.240
the same dot Product detention that I
01:08:53.240 --> 01:08:53.980
just showed.
01:08:53.980 --> 01:08:56.750
So this allows it to learn like you can
01:08:56.750 --> 01:08:58.920
pass in like the same values as value,
01:08:58.920 --> 01:08:59.460
key and query.
01:08:59.460 --> 01:09:00.650
And it can say, well, I'm going to use
01:09:00.650 --> 01:09:02.320
like this aspect of the data to compute
01:09:02.320 --> 01:09:04.310
the similarity and then I'm going to
01:09:04.310 --> 01:09:07.204
sum over this other aspect of the data
01:09:07.204 --> 01:09:08.550
to produce my output.
01:09:10.220 --> 01:09:12.550
And then to make it like just a little
01:09:12.550 --> 01:09:14.980
bit more complicated, you can
01:09:14.980 --> 01:09:17.453
do a Multi head Attention which is when
01:09:17.453 --> 01:09:21.040
you have multiple linear models and if
01:09:21.040 --> 01:09:23.878
your Input say has 100 dimensions and
01:09:23.878 --> 01:09:26.450
you have ten of these heads, then each
01:09:26.450 --> 01:09:28.693
of these linear models maps from 100 to
01:09:28.693 --> 01:09:30.770
10 so it maps into a 10 dimensional
01:09:30.770 --> 01:09:31.750
similarity space.
01:09:32.490 --> 01:09:34.665
And then you do the same operation and
01:09:34.665 --> 01:09:36.630
then you concatenate at the end to get
01:09:36.630 --> 01:09:39.440
back 100 dimensional Vector.
01:09:40.810 --> 01:09:43.330
So this allows the transformer to
01:09:43.330 --> 01:09:46.230
compare these like continuous vectors
01:09:46.230 --> 01:09:48.850
in different learned ways and then
01:09:48.850 --> 01:09:51.130
aggregate, aggregate like different
01:09:51.130 --> 01:09:52.470
aspects of the values and then
01:09:52.470 --> 01:09:54.670
recombine it into the original length
01:09:54.670 --> 01:09:55.190
Vector.
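NOTE
A minimal sketch, not from the lecture: Multi head Attention along the lines described, e.g. a 100-dimensional input split across 10 heads of 10 dimensions each. PyTorch's nn.MultiheadAttention packages the per-head linear projections, the scaled dot-product attention, and the final concatenation.
import torch
import torch.nn as nn
attn = nn.MultiheadAttention(embed_dim=100, num_heads=10, batch_first=True)
x = torch.randn(1, 8, 100)             # e.g. eight 100-d word vectors
out, weights = attn(x, x, x)           # same tensor as value, key, and query (Self Attention)
print(out.shape, weights.shape)        # (1, 8, 100) and (1, 8, 8)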
01:09:59.680 --> 01:10:01.570
So putting these together, we get
01:10:01.570 --> 01:10:03.890
what's called the transformer.
01:10:03.890 --> 01:10:05.820
Transformer is a general data
01:10:05.820 --> 01:10:08.680
processor, so you have some kind of
01:10:08.680 --> 01:10:10.630
Vector set of vectors coming in.
01:10:10.630 --> 01:10:11.990
These could be like your Word2Vec
01:10:11.990 --> 01:10:13.030
Representations.
01:10:14.500 --> 01:10:16.380
So you just have a bunch of these Word
01:10:16.380 --> 01:10:17.580
vectors coming in.
01:10:17.580 --> 01:10:19.250
It could also be as we'll see in the
01:10:19.250 --> 01:10:21.440
next class, like image patches or other
01:10:21.440 --> 01:10:22.450
kinds of data.
01:10:23.580 --> 01:10:24.140
This.
01:10:24.140 --> 01:10:26.090
Note that all of these operations are
01:10:26.090 --> 01:10:28.810
position invariant, so when I'm taking
01:10:28.810 --> 01:10:29.810
the.
01:10:30.460 --> 01:10:32.450
Like it doesn't when I when I do these
01:10:32.450 --> 01:10:33.390
kinds of operations.
01:10:33.390 --> 01:10:35.330
It doesn't matter what order I store
01:10:35.330 --> 01:10:36.860
these pairs in, the output for the
01:10:36.860 --> 01:10:38.070
query is going to be the same.
01:10:38.980 --> 01:10:41.010
And so often you add what's called a
01:10:41.010 --> 01:10:45.600
Positional Embedding to your input, so
01:10:45.600 --> 01:10:48.222
that the position is stored as part of
01:10:48.222 --> 01:10:50.170
your like value or as part of the
01:10:50.170 --> 01:10:50.640
Vector.
01:10:53.120 --> 01:10:55.230
And that allows it to know like whether
01:10:55.230 --> 01:10:56.820
2 words are next to each other or not.
01:10:59.650 --> 01:11:01.760
Positional Embedding is.
01:11:02.430 --> 01:11:05.629
In practice, in
01:11:05.630 --> 01:11:07.183
language for example, you would have
01:11:07.183 --> 01:11:09.390
like a floating point or some number to
01:11:09.390 --> 01:11:10.890
represent where the Word appears in a
01:11:10.890 --> 01:11:11.380
sequence.
01:11:12.100 --> 01:11:13.890
And then you process it through, pass
01:11:13.890 --> 01:11:16.819
it through sines and cosines of
01:11:16.820 --> 01:11:20.530
different frequencies to create like a
01:11:20.530 --> 01:11:22.400
Vector that's the same size as the
01:11:22.400 --> 01:11:24.334
original like Word Vector.
01:11:24.334 --> 01:11:27.574
And then you add it to the Word Vector.
01:11:27.574 --> 01:11:29.705
And the reason for using the signs and
01:11:29.705 --> 01:11:31.410
cosines is because if you take the dot
01:11:31.410 --> 01:11:33.280
product of these Positional embeddings.
01:11:33.900 --> 01:11:36.376
Then the similarity corresponds to
01:11:36.376 --> 01:11:38.400
their distance to their.
01:11:38.400 --> 01:11:41.019
It's like monotonic with their like
01:11:41.020 --> 01:11:41.940
Euclidean distance.
01:11:42.650 --> 01:11:45.191
So normally if you take a position X
01:11:45.191 --> 01:11:47.560
and you take the dot product of another
01:11:47.560 --> 01:11:51.120
X, then that doesn't tell you that
01:11:51.120 --> 01:11:52.520
doesn't correspond to similarity,
01:11:52.520 --> 01:11:52.720
right?
01:11:52.720 --> 01:11:54.075
Because if either one of them gets
01:11:54.075 --> 01:11:55.935
bigger than that dot Product just gets
01:11:55.935 --> 01:11:56.410
bigger.
01:11:56.410 --> 01:11:57.997
But if you take the sine and cosines of
01:11:57.997 --> 01:11:59.969
X and then you take the dot product of
01:11:59.970 --> 01:12:02.234
those sines and cosines, then if the
01:12:02.234 --> 01:12:04.360
2X's are close together then their
01:12:04.360 --> 01:12:05.910
similarity will be higher and that
01:12:05.910 --> 01:12:06.680
representation.
01:12:09.650 --> 01:12:12.920
And so you have this transformer block,
01:12:12.920 --> 01:12:14.946
so you apply the multi head attention.
01:12:14.946 --> 01:12:18.790
Then you apply a two-layer MLP, a multi-
01:12:18.790 --> 01:12:19.636
layer perceptron.
01:12:19.636 --> 01:12:22.250
You have Skip connections around each
01:12:22.250 --> 01:12:22.770
of them.
01:12:22.770 --> 01:12:24.460
That's like these arrows here.
01:12:24.460 --> 01:12:26.570
And then there's what's called a layer
01:12:26.570 --> 01:12:27.720
Norm, which is just.
01:12:29.070 --> 01:12:31.130
Subtracting the mean and dividing by the
01:12:31.130 --> 01:12:33.710
standard deviation of all the tokens
01:12:33.710 --> 01:12:34.600
within each layer.
01:12:35.580 --> 01:12:37.320
And you can just stack these on top of
01:12:37.320 --> 01:12:39.310
each other to do like multiple layers
01:12:39.310 --> 01:12:40.270
of processing.
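NOTE
A minimal sketch, not from the lecture, of one such block in PyTorch: Multi head Attention, a two-layer MLP, Skip connections around each, and layer Norm, stacked several times. The post-norm ordering and the hidden width are illustrative choices.
import torch
import torch.nn as nn
class TransformerBlock(nn.Module):
    def __init__(self, dim=512, heads=8, hidden=2048):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
    def forward(self, x):
        x = self.norm1(x + self.attn(x, x, x)[0])   # Self Attention, skip connection, layer norm
        return self.norm2(x + self.mlp(x))          # two-layer MLP, skip connection, layer norm
blocks = nn.Sequential(*[TransformerBlock() for _ in range(6)])
y = blocks(torch.randn(1, 10, 512))                 # ten 512-d token vectors in, same shape out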
01:12:43.670 --> 01:12:45.330
So this is a little more about the
01:12:45.330 --> 01:12:46.025
Positional embeddings.
01:12:46.025 --> 01:12:48.080
I forgot that I had this detail here.
01:12:49.230 --> 01:12:50.920
So this is how you compute the
01:12:50.920 --> 01:12:51.657
Positional embeddings.
01:12:51.657 --> 01:12:54.110
So you just Define these like sines and
01:12:54.110 --> 01:12:55.382
cosines of different frequencies.
01:12:55.382 --> 01:12:57.368
This is like the two to the I thing.
01:12:57.368 --> 01:13:01.718
So this is just mapping
01:13:01.718 --> 01:13:05.380
the original
01:13:05.380 --> 01:13:07.350
integer position into a smaller value
01:13:07.350 --> 01:13:08.380
before computing that.
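NOTE
A minimal sketch, not from the lecture, of the sinusoidal Positional Embedding just described: PE(pos, 2i) = sin(pos / 10000^(2i/d)) and PE(pos, 2i+1) = cos(pos / 10000^(2i/d)), added elementwise to each word vector; the sequence length and dimension are illustrative.
import numpy as np
def positional_encoding(num_positions, dim):
    pos = np.arange(num_positions)[:, None]            # integer position in the sequence
    i = np.arange(0, dim, 2)[None, :]                  # even feature indices
    angles = pos / np.power(10000.0, i / dim)          # lower frequencies for larger i
    pe = np.zeros((num_positions, dim))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe
pe = positional_encoding(50, 512)
print(pe[10] @ pe[11], pe[10] @ pe[40])                # nearby positions get the larger dot product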
01:13:11.800 --> 01:13:12.910
And.
01:13:14.680 --> 01:13:17.900
So the transformer processing is a
01:13:17.900 --> 01:13:21.910
little bit like it's it can act kind of
01:13:21.910 --> 01:13:22.810
like Convolution.
01:13:22.810 --> 01:13:25.510
So in Convolution you're comparing like
01:13:25.510 --> 01:13:27.450
each pixel for example to the
01:13:27.450 --> 01:13:29.040
surrounding pixels and then computing
01:13:29.040 --> 01:13:30.450
some output based on that.
01:13:30.450 --> 01:13:33.145
And Transformers you're also you're
01:13:33.145 --> 01:13:34.990
comparing like each patch to the
01:13:34.990 --> 01:13:36.140
surrounding patches if you're in
01:13:36.140 --> 01:13:38.199
images, but it's not limited to the
01:13:38.200 --> 01:13:40.630
nearby ones; it actually can operate
01:13:40.630 --> 01:13:42.700
over everything, all the other data
01:13:42.700 --> 01:13:43.160
that's being
01:13:43.220 --> 01:13:44.760
Processed at the same time.
01:13:49.830 --> 01:13:51.950
So here's the Complete language
01:13:51.950 --> 01:13:53.850
transformer that's in this Attention is
01:13:53.850 --> 01:13:55.040
all you need paper.
01:13:55.790 --> 01:13:58.320
So you have WordPiece tokens which we
01:13:58.320 --> 01:14:00.790
talked about that are mapped to 512
01:14:00.790 --> 01:14:03.510
dimensional vectors, and in this case
01:14:03.510 --> 01:14:05.460
these vectors are not learned by
01:14:05.460 --> 01:14:08.370
Word2Vec, they're instead just learned
01:14:08.370 --> 01:14:10.170
as part of the total transformer
01:14:10.170 --> 01:14:10.750
Training.
01:14:11.500 --> 01:14:13.380
You add a Positional Encoding to each
01:14:13.380 --> 01:14:14.150
Vector.
01:14:14.150 --> 01:14:17.176
Then you have a bunch of these Self
01:14:17.176 --> 01:14:18.830
Attention blocks that are added on top
01:14:18.830 --> 01:14:19.370
of each other.
01:14:19.370 --> 01:14:21.837
So the data the Inputs go through these
01:14:21.837 --> 01:14:22.850
Self Attention blocks.
01:14:23.650 --> 01:14:29.035
And then you also have like your output
01:14:29.035 --> 01:14:29.940
is added.
01:14:29.940 --> 01:14:32.230
So for example, first if you're trying
01:14:32.230 --> 01:14:34.980
to say what is the color of a banana.
01:14:36.170 --> 01:14:38.527
Then you process your what is the color
01:14:38.527 --> 01:14:40.790
of a banana here and then you generate
01:14:40.790 --> 01:14:43.150
the most likely output and then so
01:14:43.150 --> 01:14:44.580
maybe that the output then will be
01:14:44.580 --> 01:14:45.236
yellow.
01:14:45.236 --> 01:14:48.555
And then next you Feed yellow in here
01:14:48.555 --> 01:14:50.377
and again you take like the output of
01:14:50.377 --> 01:14:51.632
what is the color of a banana.
01:14:51.632 --> 01:14:53.392
And then you do Cross Attention with
01:14:53.392 --> 01:14:55.250
yellow and then hopefully it will
01:14:55.250 --> 01:14:56.972
output like end of sequence, so it'll just
01:14:56.972 --> 01:14:59.049
say yellow or it could say like yellow,
01:14:59.050 --> 01:15:00.280
green or whatever.
01:15:00.280 --> 01:15:03.265
And so every time you output a new Word
01:15:03.265 --> 01:15:05.380
you consider all the words that were
01:15:05.380 --> 01:15:05.770
Input.
01:15:05.820 --> 01:15:07.230
As well as all the words that have been
01:15:07.230 --> 01:15:08.013
output so far.
01:15:08.013 --> 01:15:10.355
And then you output the next word, and
01:15:10.355 --> 01:15:12.380
then you keep on outputting one word at
01:15:12.380 --> 01:15:14.570
a time until you get to the end of
01:15:14.570 --> 01:15:17.510
sequence token, which means that you're
01:15:17.510 --> 01:15:17.780
done.
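NOTE
A minimal sketch, not from the lecture, of the one-word-at-a-time generation loop just described, assuming a hypothetical model(input_ids, output_ids) that returns next-token scores and hypothetical BOS/EOS ids; the loop structure, not the names, is the point.
import torch
def generate(model, input_ids, bos_id, eos_id, max_len=20):
    output_ids = [bos_id]
    for _ in range(max_len):
        logits = model(input_ids, torch.tensor(output_ids))   # attends over the input words
        next_id = int(logits[-1].argmax())                    # and over everything output so far
        output_ids.append(next_id)
        if next_id == eos_id:                                 # stop at the end-of-sequence token
            break
    return output_ids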
01:15:24.950 --> 01:15:28.310
So I'm pretty much done, but I'm going
01:15:28.310 --> 01:15:30.790
to wrap this up at the start of the
01:15:30.790 --> 01:15:31.230
next class.
01:15:31.230 --> 01:15:32.870
I'll talk about how you apply this to
01:15:32.870 --> 01:15:33.630
Translation.
01:15:34.300 --> 01:15:35.930
And I'll show these Attention
01:15:35.930 --> 01:15:39.274
Visualizations then, so we can see just
01:15:39.274 --> 01:15:41.435
I'll show you just one briefly.
01:15:41.435 --> 01:15:45.350
So for example, in this sentence, it is
01:15:45.350 --> 01:15:46.860
in this spirit that a majority of
01:15:46.860 --> 01:15:48.410
American governments have passed new
01:15:48.410 --> 01:15:49.889
laws since 2009, making the
01:15:49.890 --> 01:15:51.750
registration of voting process more
01:15:51.750 --> 01:15:52.160
difficult.
01:15:52.160 --> 01:15:52.570
EOS.
01:15:52.570 --> 01:15:56.060
But it's like after you do these
01:15:56.060 --> 01:15:58.260
Transformers, you can see that there's
01:15:58.260 --> 01:16:00.640
the representation
01:16:00.640 --> 01:16:03.005
of making draws its meaning or draws
01:16:03.005 --> 01:16:05.200
its additional values from more
01:16:05.200 --> 01:16:05.790
difficult.
01:16:06.390 --> 01:16:10.062
Because making more difficult is like
01:16:10.062 --> 01:16:10.344
the.
01:16:10.344 --> 01:16:12.070
It's like the Syntactic like
01:16:12.070 --> 01:16:13.146
Relationship, right?
01:16:13.146 --> 01:16:15.160
It's making more difficult so it's able
01:16:15.160 --> 01:16:17.570
to like jump words and draw similarity,
01:16:17.570 --> 01:16:19.270
draw meaning from other words that are
01:16:19.270 --> 01:16:21.196
related to this Word in terms of the
01:16:21.196 --> 01:16:23.020
meaning or in terms of the syntax.
01:16:26.240 --> 01:16:26.930
Yeah.
01:16:28.600 --> 01:16:30.650
So I will show you some of these on
01:16:30.650 --> 01:16:32.810
Thursday and talk about how it's used
01:16:32.810 --> 01:16:33.850
for Translation.
01:16:33.850 --> 01:16:35.510
So that'll just take a little bit of
01:16:35.510 --> 01:16:36.100
time.
01:16:36.100 --> 01:16:39.110
And then I'm going to talk about the
01:16:39.110 --> 01:16:42.170
application of the Transformers using
01:16:42.170 --> 01:16:45.141
Bert, which is a very popular language
01:16:45.141 --> 01:16:47.863
model, as well as visit, which is a
01:16:47.863 --> 01:16:49.990
very popular vision model and Unified
01:16:49.990 --> 01:16:52.854
IO which is a vision language model.
01:16:52.854 --> 01:16:55.850
And once you once you like, these are
01:16:55.850 --> 01:16:57.640
all just basically Transformers like
01:16:57.640 --> 01:16:58.950
they're Architecture sections.
01:16:59.010 --> 01:17:01.450
Are and we use Transformers from
01:17:01.450 --> 01:17:04.780
Vaswani et al., like basically like doing
01:17:04.780 --> 01:17:07.020
nothing else like they have like all of
01:17:07.020 --> 01:17:08.170
these papers that are using
01:17:08.170 --> 01:17:09.963
Transformers have Architecture sections
01:17:09.963 --> 01:17:12.400
like that big because this single
01:17:12.400 --> 01:17:16.050
transformer block can do like any kind
01:17:16.050 --> 01:17:17.970
of processing.
01:17:17.970 --> 01:17:18.865
So.
01:17:18.865 --> 01:17:19.950
All right.
01:17:19.950 --> 01:17:21.340
So I'll pick it up next Thursday.
01:17:21.340 --> 01:17:22.110
Thank you.
01:17:25.840 --> 01:17:26.440
Office hours.