L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning

[00:14:12 - 00:14:44] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=852s
randomly permuted, which means that the ordering of the letters in between doesn't carry much real information; it could be any ordering, so you don't need to stick to the original order. Even with the letters scrambled like this, we can still understand it, so it means there's some redundancy. It means that certain sequences are just not very likely, and when you read this, it's close to a sequence that you're familiar with, so you can easily map it onto that and still understand the words that were there originally. There's another example from
[00:14:44 - 00:15:20] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=884s
images. On the left we see a bunch of real-world images, of flowers in this case; on the right we see random data. If your dataset looks like the data on the left, it's very compressible, because there are a lot of regularities: at an intuitive level, for example, neighboring pixels often have roughly the same value. For the images on the right, which are completely random, there is no correlation between neighboring pixels that you can exploit to compress how you represent the data. So, two very different
[00:15:20 - 00:15:57] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=920s
distributions: for the completely random distribution it's not clear how to compress, while for the real-world type of data you can already intuitively see that there are opportunities to compress. For example, you could just keep every other pixel; it wouldn't be perfectly lossless, but we could probably reconstruct most of the image from that. Alright, so what we've covered so far is: what is compression, what's the goal in compression, and why might we care, both from a practical point of view and from an AI point of view. We also looked at the
[00:15:57 - 00:16:24] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=957s
fact that a universal lossless compressor is just not possible. We looked at some intuition for there being redundancy in most of the data that we encounter in the real world, and because there is redundancy, intuitively speaking, there should be a way to exploit that: it's only the data that really occurs in the real world that you need good compression for, and for the data that doesn't really occur in the real world, even though it can in principle also be represented as bit strings, you might not care much
[00:16:24 - 00:16:54] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=984s
about how well that kind of non-real-world data gets compressed. So for the remainder of this lecture we want to look at a couple of things. The first thing we want to look at is coding of symbols: we'll start looking at what it means to actually have a compression system, and this will culminate in a method called Huffman coding, which is actually used in many of today's systems and is quite intuitive, a very simple way to understand how compression can effectively work. Then we're going to look at
[00:16:54 - 00:17:38] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1014s
some theoretical limits, and from there we'll look at some additional considerations for coding that will help us a bit more than what we get from the simplest version we cover first. From there we'll tie this into things that we've covered in this class: we'll look at autoregressive models, we'll look at VAEs, we'll look at flow models, and try to understand how these models can be leveraged to do better compression. Alright, let's get this started. So here's one way of coding information, and just to be clear, there's
[00:17:38 - 00:18:15] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1058s
no compression in this way of coding. ASCII is a system that assigns seven bits to everything that's on your keyboard, so every character you can type can be represented with seven bits, meaning two to the seven possible characters can be represented this way. What's nice about this is that it's very easy to encode and decode: there's a very simple one-to-one mapping, always going to the seven bits for that character and back out to the character. But if you encode this way you're not exploiting the discrepancies in how often characters occur; it's not compressing your
[00:18:15 - 00:18:47] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1095s
information. Maybe some keystrokes are far less likely than others, so maybe the ones that are less likely should be allowed to use more bits, and the ones that are very likely you should try to represent with a very small number of bits, and overall you might have a win. That's the intuition behind a lot of compression schemes. But obviously here, with everything at seven bits, that's not going to happen; it's at least a reference as a starting point. So we'll need variable-length codes: codes that assign different lengths depending on how likely a symbol is.
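As a concrete reference point, here is a minimal sketch of the fixed-length baseline just described (the message string is an arbitrary example, not from the lecture): every character costs exactly 7 bits no matter how frequent it is, which is what variable-length codes will improve on.

    def ascii7_encode(text: str) -> str:
        """Encode a string as a concatenation of 7-bit ASCII codewords."""
        return "".join(format(ord(c), "07b") for c in text)

    def ascii7_decode(bits: str) -> str:
        """Decode by cutting the bitstream into fixed 7-bit chunks."""
        return "".join(chr(int(bits[i:i + 7], 2)) for i in range(0, len(bits), 7))

    msg = "hello compression"
    bits = ascii7_encode(msg)
    assert ascii7_decode(bits) == msg
    print(len(bits))   # 119 bits = 7 * 17 characters
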
[00:18:47 - 00:19:22] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1127s
How do we avoid ambiguity? When you have fixed lengths it's very easy: the first seven bits are one character, the next seven bits are the next character, and so on. But if it's variable length, how do you know a character has been fully transmitted and the next one is starting? One way to do this is to ensure that no codeword is a prefix of another codeword. So as you see bits come across the line, at some point you'll have seen all the bits for some letter, let's say, and because no
[00:19:22 - 00:20:03] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1162s
codeword is a prefix of another one, at that point you know there's nothing else that can continue from it: this is the complete codeword that was sent across, and the corresponding character can be decoded. Another thing you could do, though it might consume more bandwidth or space, is to append a stop character to each codeword; Morse code does this, but it can be a little wasteful. We can have a general prefix-free code, and we'll look at that very soon. So let's look at Morse first. In Morse code, what happens is this:
[00:20:03 - 00:20:39] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1203s
it's a very old coding scheme, from back when what you had to send over was effectively just a communication line over which all you could send was, let's say, a voltage going up and back down. You can make it go up briefly, or go up for longer, three times as long: a dot is a brief spike up in your voltage, let's say, and a dash is three times as long, and the spaces in between the dots and dashes also encode something, namely quiet. There is a quiet time in between,
[00:20:39 - 00:21:17] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1239s
and between characters there will be a total of three units, and between words there will be seven units of quiet time. That way you can encode every character of the alphabet and all the numbers, and all you need to be able to do is send dots and dashes, short and longer signals, and pauses, to get everything across. People used this before telephones, in the time of the telegraph, where they could send information across this way. Some things you can already see here: the letter A has a relatively short encoding,
[00:21:17 - 00:21:57] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1277s
same for E, same for I, and that's because those are frequently used letters, while things that are less frequent, like maybe a Z, have a longer encoding. What else is less frequent? J: a longer encoding. Essentially, more or less, the letters you get a lot of points for in Scrabble have the long encodings, like Q here and X over here, and the letters that don't give you many points in Scrabble have the shorter encodings, because there are many more words that use them. Okay, that's a very specific scheme; the more
[00:21:57 - 00:22:36] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1317s
general thing that people tend to use is so-called prefix-free codes, which can be represented as binary tries or binary trees. So what is a binary trie or tree? It's a tree where, whenever things split, you hard-code ahead of time that the left branch will be a zero and the right branch will be a one. So you can build a tree, and you don't even have to put the zeros and ones on it as I'm doing here, because you always know the left side is a zero and the right side is a one. A trie is a specific type of data structure; the reason it's spelled with "ie" is
[00:22:36 - 00:23:08] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1356s
because it comes from "retrieval": it's a data structure for easy retrieval of certain information. That's also why it's often pronounced "tree", because it comes from retrieval; at the same time there are also trees spelled the usual way, which are also a data structure, and it can be a little confusing that they're pronounced the same way, so some people will pronounce this as "try" to distinguish it from trees. So that's a binary trie. The way we're going to use it is that the symbols will always live in the leaves, and a codeword is a path from root to leaf.
[00:23:08 - 00:23:54] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1388s
So let's look at an example. Here's an example: we have a codeword table, we have one, two, three, four, five, six characters, and each character has an encoding as a sequence of bits, sometimes only one bit. And you can see that there's a correspondence with this binary tree, where all the characters are sitting in leaves of the tree. Because every character sits in a leaf of the tree, here is what that means. Let's say I'm getting some message across, say I'm receiving this message over here. I receive a zero; what will I do? I will
[00:23:54 - 00:24:30] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1434s
go down this path and say, oh, I hit a leaf, that means I'm at the end, nothing left to go: I decode an A. Then I get this one over here, so we restart: that means go this way; I get another one, go this way; another one, go this way; another one, go this way; I hit a leaf, I know I'm ready to decode, and it's a B. And so, because all the symbols live in the leaves, I always know, when I hit a leaf, which symbol I need to decode, and then I come back to the top to start decoding the rest of the message that's coming in.
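Here is a sketch of that decoding walk in Python, using a dictionary lookup in place of an explicit tree; the codeword table is a made-up prefix-free example, not the one on the slide, but any prefix-free table behaves the same way.

    CODE = {"A": "0", "B": "1111", "C": "110", "D": "10", "E": "1110"}

    def is_prefix_free(code: dict) -> bool:
        words = list(code.values())
        return not any(u != v and v.startswith(u) for u in words for v in words)

    def decode(bitstream: str, code: dict) -> str:
        lookup = {bits: sym for sym, bits in code.items()}
        out, buf = [], ""
        for bit in bitstream:       # walk down the tree one bit at a time
            buf += bit
            if buf in lookup:       # hit a leaf: emit the symbol, restart at the root
                out.append(lookup[buf])
                buf = ""
        assert buf == "", "bitstream ended in the middle of a codeword"
        return "".join(out)

    assert is_prefix_free(CODE)
    encoded = "".join(CODE[s] for s in "ABCADE")
    print(decode(encoded, CODE))    # ABCADE
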
[00:24:30 - 00:25:06] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1470s
You can of course ask yourself the question: for a given set of symbols that you want to send, are there multiple binary trees? In fact there are; there are many, many trees you could put forward to come up with a coding for these six symbols. Here is another example: here the tree is set up a little differently, and we see the same string being compressed twice; on the left it requires 30 bits, on the right it requires 29 bits. And so the name of the game here is: can we find a binary tree such that, as I try to encode my message,
[00:25:06 - 00:25:38] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1506s
as I try to put my original symbol message into a bitstream, that bitstream is as short as possible? In principle you could search over all possible binary trees, but there would be many, many binary trees, and then decide which one is most efficient. We'll see better schemes than that, but the very naive way to put it is: just try all possible binary trees that have the symbols at the leaves, see for each one of them how long the bitstream is, and take the best one. We'll see an efficient method
[00:25:38 - 00:26:18] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1538s
to get very close to that, in fact to get to the optimal one. Okay, so the efficient method to find the optimal one, without needing to do the exhaustive search I just described, is something called Huffman codes. Right now we'll cover how Huffman codes work procedurally, and then later, once we've seen a bit more foundation on information theory, we will also prove the fact that they are optimal. For now we're not yet going to prove that they are optimal; we're just going to look at the procedure. Okay, so how does it work?
[00:26:18 - 00:26:52] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1578s
The Huffman algorithm is actually very simple. Consider the probability p_i of each symbol i that's in your input: if you have, say, a long text file and you're encoding characters, you would do a count for each character and then see what the probability is for each character to appear. Once you've done that, you start with one node corresponding to each symbol, so for each of these symbols you have a node. It starts as a disconnected forest, just a bunch of separate leaves really, not connected up to anything yet, and
[00:26:52 - 00:27:32] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1612s
you associate with each node a weight p_i, which is the probability of that symbol. From there you repeat the same process over and over until it's all connected together in a single tree. What is this process? You select the two trees with minimum probabilities p_k and p_l: initially, when each symbol is its own tree, that means you find the two symbols with the lowest probability; later on, once you've done some merges, it'll be the trees whose roots have the lowest probability. Then you merge those two into a single tree with associated probability the sum
[00:27:32 - 00:28:15] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1652s
of the original probabilities. And that's it, that's all you need to do. So let's take a look at an example of how this works on some example data. Here we have six symbols, each symbol has its own probability associated with it, and let's step through what Huffman coding does. We have six symbols, each with its own probability: we have A with a probability of 0.2, we have B with probability 0.1, C with probability 0.05, D with probability 0.21, E with probability 0.36, and F with probability
[00:28:15 - 00:28:53] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1695s
0.08. Let's follow the algorithm. What are the two lowest-probability things? It's C and F. So what do we do? We connect them up: C and F get connected up, and together they have the sum of the probabilities, which is 0.13. What's lowest in probability now? It's the 0.1 here and the 0.13 over here, so we connect those up, and the top here now has probability 0.23. What's lowest now? We have a 0.2 and a 0.21. The 0.21 is somewhat inconveniently
[00:28:53 - 00:29:45] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1733s
located, so I'm going to move it over here, the 0.21, moving it off to the side, and D and A connect together for a 0.41. What are the two lowest now? It's the 0.23 here and the 0.36 over here; the 0.36 is inconveniently located, so I'm going to relocate E over here. Alright, then we connect these, and together they have 0.59. The two lowest ones are the only two left, the 0.41 and the 0.59, and here is our Huffman encoding. And then
[00:29:45 - 00:30:41] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1785s
what we do is label the left side of each split 0 and the right side 1, and there we go, now we have an encoding. We want to know: what is D? D is 00. What is A? A is 01. What is B? B is 100. What is C? C is 1010. E is 11, and F is 1011. And this is a uniquely decodable code: for every symbol, once the bits have been sent across and you hit a leaf of the decoding tree, you know you've got an entire symbol, and then you start again at the top of the tree to decode the next symbol. So we haven't covered why this is optimal, but hopefully the procedure is clear.
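A compact sketch of this merge procedure in Python, run on the probabilities from the example; heap tie-breaking may flip some 0/1 labels relative to the slide, but the codeword lengths, and hence the expected length, come out the same.

    import heapq

    def huffman_code(probs: dict) -> dict:
        """Repeatedly merge the two lowest-probability trees, prepending bits."""
        heap = [(p, i, [sym]) for i, (sym, p) in enumerate(probs.items())]
        heapq.heapify(heap)
        code = {sym: "" for sym in probs}
        counter = len(heap)
        while len(heap) > 1:
            p1, _, syms1 = heapq.heappop(heap)   # lowest-probability tree
            p2, _, syms2 = heapq.heappop(heap)   # second lowest
            for s in syms1:                      # left subtree gets a leading 0
                code[s] = "0" + code[s]
            for s in syms2:                      # right subtree gets a leading 1
                code[s] = "1" + code[s]
            heapq.heappush(heap, (p1 + p2, counter, syms1 + syms2))
            counter += 1
        return code

    probs = {"A": 0.2, "B": 0.1, "C": 0.05, "D": 0.21, "E": 0.36, "F": 0.08}
    code = huffman_code(probs)
    print(code)   # codeword lengths: A 2, D 2, E 2, B 3, C 4, F 4
    print(sum(p * len(code[s]) for s, p in probs.items()))   # ~2.36 bits/symbol (entropy ~2.31)
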
[00:30:41 - 00:31:17] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1841s
It is a relatively simple procedure that you can run for any symbol table you have, and it relies on these probabilities. And you might already see some foreshadowing here: of course, this is where generative models might be handy. Very good generative models might allow us to build good probability estimates that we can then use to find a really good encoding, because of course, if these probabilities are wrong, then this tree will not be a very good tree for encoding the data. So here's another example that you can work
[00:31:17 - 00:32:13] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1877s
through, solving another example yourself. And indeed, in many of the applications that are used on the internet, Huffman codes are used to compress the data. Alright, so maybe let me pause here for a moment to see if there are any questions. If you have any questions, feel free to type them into the chat window or to just speak up or raise your hand. Oh, hi Drew. Yeah, I had a question: so I guess something that I noticed about Huffman codes is that the number of symbols or number of values that you have is fixed,
[00:32:13 - 00:32:51] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1933s
but if you're trying to encode a more complex data structure, so if you have something like, you know, maybe images have fixed dimensions, but audio can have multiple dimensions, for example, is there a way other than discretizing, or is the notion just to make chunks and then compress them, fixed-size chunks which take discrete values? Yeah, very good question. So chunking is indeed an option, to just send things over in chunks, which by the way can often be desirable for other reasons too. Even if you had a fixed-
[00:32:51 - 00:33:23] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=1971s
size thing: let's say you had a video you wanted to watch at home. If somebody first has to encode the entire video, send it across as one file, and only then you can decode and play it, that's not great; you want to be able to stream it across. So there are reasons to chunk where you actually forfeit a bit of optimality of compression, but you reduce the latency of getting things across. We will look at some other codes a little later, and it's a really good question that actually fits very well with what we'll be describing: the coding systems we'll
[00:33:23 - 00:33:57] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2003s
look at later, arithmetic coding and asymmetric numeral systems, are able to encode streams in effectively a continuous way, such that if the stream is longer it can keep encoding; it just continues to encode on the fly as you go along. Now, in practice people will often still chunk and stop at some point, because otherwise you might have to wait too long before you can decode, but in principle they can work with arbitrary lengths, without knowing the length ahead of time. So we'll cover that, but you're
[00:33:57 - 00:34:39] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2037s
absolutely right that for Huffman codes there is that strong assumption that you have an alphabet of symbols, and you build an encoding for that alphabet and encode those specific symbols. Does that make sense? Thank you. Will compression usually be done in terms of bits, like, will the output of the encoder be like a lookup table, or what will it be? Yes: the way we think of compression, and the way it's ultimately done on computers, is that what comes out is a sequence of bits. You can think of a single bit as
[00:34:39 - 00:35:11] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2079s
in some sense the minimal unit of information: a single bit can either be 0 or 1, and in some sense the minimum you can send across in terms of information is just a 0 or a 1, because there are two options to send information with; if you have only one option, well, there's nothing you can do, there's no information being transmitted. So in fact, as we'll see, the amount of information in your system gets measured in bits: the minimal number of bits required to
[00:35:11 - 00:35:49] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2111s
represent the original is the amount of information in that piece of data. One quick thing to add, maybe: it is the case that when you actually transmit over certain lines that are not, let's say, a computer storing zeros and ones, there are transmission schemes where you send maybe two bits in one go, using something closer to a continuous channel that you then discretize on the other side to get out several bits in one go. So that can also happen under the hood, but in terms of the information-theoretic
[00:35:49 - 00:36:38] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2149s
properties, we tend to think of it as turning everything into a sequence of bits. Alright, great questions, thank you. Let's move to the next part, which is theoretical limits, and what we're going to cover here is, to me, some of the most beautiful math that any discipline has to offer. Somehow what we're going to cover, we can cover in just a few slides, cover it quite comprehensively in just a few slides, and get very deep, profound insights and guarantees across. So I'm very excited about getting to talk about this
[00:36:38 - 00:37:11] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2198s
today. So, one thing you might have heard of in the context of information theory is this thing called entropy, due to Shannon, as sort of a measure of information. So what is entropy? By definition, and it's just a mathematical definition, we're not talking about properties yet: the entropy of X. What is X? X is a random variable, and so X really just has some distribution p(X), and we measure the entropy of the random variable, not of a specific instantiation of that variable. Anyway, the entropy of the distribution, or of the random
[00:37:11 - 00:37:49] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2231s
variable as a whole, is by definition: you sum over all possible values the random variable can take on, and you take a weighted sum, the probability of taking on that value times log2 of 1 over p(x_i); that is, H(X) = Σ_i p(x_i) log2(1/p(x_i)). Okay, so this might look like it comes a little out of nowhere, but let's get a little bit of intuition for why this might be a meaningful way to measure entropy, which is the amount of uncertainty you have in a distribution: if there's a lot of uncertainty about the random variable, you need more bits to send across when you
[00:37:49 - 00:38:24] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2269s
want to tell the other person what the outcome was. So I have a random variable, I run an experiment, I see the outcome of that random variable, and I want to communicate the outcome to you: how many bits do I need to send on average? And this in fact doesn't just talk about that; it also hints at an encoding scheme. It kind of says the number of bits you should use for an outcome x_i is going to be log2(1/p(x_i)). So let's look at an example distribution. Here's an example distribution: the random variable can take on
[00:38:24 - 00:39:03] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2304s
five values, and we'll compute the entropy of this thing. So it is 1/4, 1/4, 1/4, 1/8, 1/8, and we can compute the entropy: the entropy is 2.25. Then let's look at another distribution, much more peaked: the probabilities are three quarters, and then 1/16 for everything else; compute the entropy, and it's about 1.3. So 1.3 versus 2.25: entropy is a lot larger on the left than on the right. Why is that? Because if I run an experiment on the random variable on the left and then want to communicate the outcome to you, there are actually many possible outcomes that are pretty likely.
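A quick sketch verifying those two numbers directly from the definition above:

    from math import log2

    def entropy(probs):
        """H(X) = sum_i p(x_i) * log2(1 / p(x_i))."""
        return sum(p * log2(1 / p) for p in probs if p > 0)

    flat = [1/4, 1/4, 1/4, 1/8, 1/8]
    peaked = [3/4, 1/16, 1/16, 1/16, 1/16]
    print(entropy(flat))     # 2.25
    print(entropy(peaked))   # ~1.31
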
[00:39:03 - 00:39:34] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2343s
And so it's not like you can come up with a very efficient encoding scheme, because you need to encode pretty much everything: with some reasonable probability you're going to have to send it across. Whereas here on the right, what happens is that this first outcome over here is extremely likely, so if you encode that first outcome with a very small number of bits, then most of the time you have to send almost nothing, and yes, sometimes you have to send more bits to get the other things across, but most of the time it's very cheap. And that's effectively what's going
[00:39:34 - 00:40:20] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2374s
on in this equation. For now we're building intuition; we'll make this a lot more formal very soon. So let's take a look at another example. Think back to our binary trees to encode some set of symbols: we have symbols A, B, C and D. If the probabilities are 1/2, 1/4, 1/8, 1/8, then this thing over here is an optimal way of encoding: half the time you send just one bit, for A; the other half of the time you've got to cover the rest, and to say that you're covering the rest you have to send a one, and then
[00:40:20 - 00:40:53] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2420s
within that other half of the time, half the time you send one more bit to say it's B, and in the remaining half you signal with a one that it's one of the other two, and then at the end you decide which one it is when you're down here. So, even though we haven't proven this, intuitively it should make sense that this is a very good scheme for encoding this kind of distribution over symbols, because you can't send anything less than one bit for A, otherwise you have not communicated anything, and A is the most
[00:40:53 - 00:41:33] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2453s
frequent symbol, so all you have to do is send one bit; and for B, well, you first signal that it's not A and then you send one more bit to communicate that it's B, and similarly for C and D. This encoding scheme over here uses a length that is log2 of 1 over p(x_i). And so you could imagine a world where every probability associated with any symbol, so you have some symbol x_i and its probability p(x_i), can
[00:41:33 - 00:42:09] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2493s
be expressed as 2 to the power minus L_i; then you can encode, using the same scheme, that symbol x_i with a length-L_i bitstream, in a tree that would be built up the way the tree was built up over here. We haven't proven this, but that's the rough intuition, and we'll of course see things that generalize this to symbols where p(x) is not necessarily one over two to the power of something; it could be any probability different from a power of 1/2.
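A small sketch of the dyadic case just described: when every probability is a power of 1/2, the lengths log2(1/p_i) are integers and the expected code length exactly equals the entropy.

    from math import log2

    probs = {"A": 1/2, "B": 1/4, "C": 1/8, "D": 1/8}
    lengths = {s: int(log2(1 / p)) for s, p in probs.items()}    # {'A': 1, 'B': 2, 'C': 3, 'D': 3}
    expected_length = sum(p * lengths[s] for s, p in probs.items())
    entropy = sum(p * log2(1 / p) for p in probs.values())
    print(lengths, expected_length, entropy)                     # both equal 1.75
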
[00:42:09 - 00:42:52] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2529s
Okay, so that's some high-level intuition; let's now take a look at some of the theory that we can put down. The first main theorem is the Kraft-McMillan inequality. What it says is that for any uniquely decodable code C... okay, so somebody tells you: I have a code, it's uniquely decodable. And if it's not uniquely decodable you can't really use it to do lossless compression, so codes do need to be uniquely decodable, otherwise we're not going to consider them for lossless compression. So someone comes up with a uniquely decodable code C. What does that mean? It means a mapping from symbols to bit strings, a bit string corresponding to each symbol. If it's
[00:42:52 - 00:43:30] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2572s
indeed uniquely decodable, then this property holds true: Σ_i 2^(-l_i) ≤ 1. What is this saying? It's saying that for each symbol and its corresponding encoding, the code word, the bit word, we can look at the length of the encoding, and there's some property satisfied by these lengths. So if somebody gives you a table of symbols and bit strings, say A and some bit string here, then B and some other bits here, and so forth, and the code is uniquely decodable, then the lengths that you encounter here will satisfy this
[00:43:30 - 00:44:05] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2610s
property. It says this sum has to be smaller than or equal to one. What does that mean? These are negative powers here, so it's effectively saying that the lengths have to be large enough; they always have to be of a certain length, otherwise this wouldn't be satisfied. So, summing it up, this thing is saying: if someone has a uniquely decodable code, I can guarantee you that the encodings have to be relatively long; they cannot be shorter than a certain amount, because otherwise they would not satisfy this property.
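A tiny sketch of that check, with made-up sets of lengths:

    def kraft_sum(lengths):
        """Left-hand side of the Kraft-McMillan inequality: sum_i 2^(-l_i)."""
        return sum(2 ** -l for l in lengths)

    print(kraft_sum([2, 2, 2, 3, 3]))   # 1.0  -> a prefix code with these lengths exists
    print(kraft_sum([1, 1, 2]))         # 1.25 -> no uniquely decodable code has these lengths
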
[00:44:05 - 00:44:44] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2645s
What's more, this actually holds in the opposite direction too. The opposite direction says: if you have a set of lengths l_i that satisfy that same inequality, then there is a code you can build, in fact a prefix code, which is very convenient to deal with, which is uniquely decodable and has exactly these lengths. So it's a back-and-forth kind of mapping: if something is uniquely decodable, this is satisfied; and if a set of lengths satisfies this, you can build a uniquely decodable code, in fact a prefix code, a tree that allows you to encode symbols with these code word lengths. What does
[00:44:44 - 00:45:22] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2684s
this mean? This means that, since this is a property that holds true for any uniquely decodable code: someone can give you a uniquely decodable code, and this property will be true; and when this property is true, there is also a prefix code with the same lengths. So it means that we never need to resort to anything but prefix codes. If somebody says, I have a very clever scheme to make the bitstream uniquely decodable, but you might have to look ahead and look at many places to decode, you can say: no need, I can use the same encoding lengths and build a prefix
[00:45:22 - 00:46:06] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2722s
code that will have the same efficiency as your other uniquely decodable code, which would be more troublesome to decode. So we're going to restrict attention to prefix codes. Alright, so what's under the hood here? Let me give you a quick, brief proof sketch. One direction: for any prefix code C, and that's a subset of what's on the previous slide, which was for any uniquely decodable code, we have that this inequality is satisfied; that's what I stated. What's the sketch? Order all the lengths of your code words; then, since we have a prefix code, so we have all these
[00:46:06 - 00:46:45] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2766s
lengths for a prefix code, we can build a tree. Look at the tree: the tree will initially end over here, at those red dots, because that's where the code words are, but we expand the tree to be of equal depth everywhere. So even though maybe your symbol A would be encoded here, you continue, because you want to make it all equal depth. Then, after you've done that, you can do a simple count. You can ask, for each code word, for example this one over here, how many leaves are covered by it? Well, the whole
[00:46:45 - 00:47:35] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2805s
tree is of depth 4, so the whole thing is depth four, and this code word is at depth two, so what's under it is 2 to the 4 minus 2, which is 4 leaf nodes; under this one here we have 2 to the 4 minus 1, which is 8 leaf nodes, and so forth. So every code word covers some leaves of the expanded tree, and since it's a prefix code there is no overlap; it's a clean tree. So the total number of leaves covered will be at most, in this case, 2 to the 4, or in general 2 to the L_n, with L_n the maximum code length that you're considering. So we get the obvious inequality here that the number of leaves
[00:47:35 - 00:48:19] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2855s
covered is smaller than the total number of leaves you could have in the tree: Σ_i 2^(L_n - l_i) ≤ 2^(L_n). You just divide both sides by 2 to the L_n and you get this thing over here. So it's not too hard to prove; the details of the proof don't matter too much, but it can be done in one slide. That's the first part. How about the second part? The second part says: for any set of lengths, if this inequality is satisfied, then we can build a prefix code tree with those lengths. How is this done? You consider a full tree of depth L_n, which is the longest length, and for each i you pick any node of depth l_i
[00:48:19 - 00:48:59] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2899s
that is still available. So you go down the tree to depth l_i: is anything still available? Okay, I pick this one. Once you pick that, you consider everything below it used up, so at this point nothing below there is available anymore. This will consume 2^(L_n - l_i) leaves of the expanded tree, and as you count up how many leaves you're going to cover in this process, it's going to be this many on the left here. We're told that the thing on the top holds true, which means that this is smaller than 2^(L_n), and that means we can fit this inside the tree: we are able to fit all the code words inside a tree.
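A sketch of this constructive direction in Python, using the standard canonical-code assignment (shortest lengths first), which is the same idea as the node-picking argument above phrased without the explicit tree:

    def code_from_lengths(lengths):
        assert sum(2 ** -l for l in lengths) <= 1, "Kraft inequality violated"
        codewords, next_code, prev_len = [], 0, 0
        for l in sorted(lengths):
            next_code <<= (l - prev_len)                    # descend to depth l in the tree
            codewords.append(format(next_code, "0{}b".format(l)))
            next_code += 1                                  # move to the next free node at depth l
            prev_len = l
        return codewords

    print(code_from_lengths([2, 2, 2, 3, 3]))   # ['00', '01', '10', '110', '111']
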
[00:48:59 - 00:49:37] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2939s
Okay, so those were two quick proofs; we don't need to know how to prove these going forward, but I wanted to get across that they are actually relatively simple to prove. A consequence of this is probably something you've heard many, many times, and that we will now be able to prove very easily: for any message distribution p(X), some distribution over symbols, if you have an associated uniquely decodable code C, then the average encoding length, so
[00:49:37 - 00:50:15] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=2977s
the expected length of your code when we encode a symbol, will always be at least the entropy of the distribution. This is Shannon, 1948: entropy is a lower bound on how many bits you need to encode symbols coming from a certain distribution. So let's step through the key steps to get there. This is just what we're starting from: the difference between entropy and expected code length. Entropy is this thing here; for the expected code length you look at all possible symbols, look at the length, and take the weighted sum. We have p(x_i)
[00:50:15 - 00:50:57] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3015s
here and p(x_i) over here; we can bring these together, and we have this over here. Then, to bring it even closer together, we're going to say, well, l_i equals log2 of 2 to the l_i. Now we have a difference of two logs, and we can bring the logs together: the things inside get multiplied together, or divided by each other if there's a negative sign, and there is a negative sign appearing here. Then, what is this thing over here, what are we doing? We're replacing, let me expand it, we're essentially replacing it by bringing this thing over here,
[00:50:57 - 00:51:36] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3057s
where the expected value of the log of something turns into the log of the expected value: that's Jensen's inequality. We've seen that in variational autoencoders, and you see it in many, many places in machine learning; we just applied Jensen's inequality, the expected value of the log is smaller than the log of the expected value. And as we bring it along, we have the expected value of the log over here, whereas the log of the expected value is above, and so we have Jensen's inequality applied here. How about the next step? Here we use
[00:51:36 - 00:52:30] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3096s
the Kraft-McMillan inequality, which says that if we have a uniquely decodable code, this thing over here has to be smaller than or equal to one, and then log of one is zero and we're done. So to prove Shannon's theorem, all we needed was Jensen's inequality and then the Kraft-McMillan inequality, and we're good to go; we have the full proof. Let me maybe pause here, since this is a pretty big result, and see if there are any questions. Alright.
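For reference, the chain of steps just described can be written compactly in LaTeX (with l_i the codeword lengths):

    \begin{aligned}
    H(X) - \mathbb{E}[L]
      &= \sum_i p(x_i)\log_2\frac{1}{p(x_i)} - \sum_i p(x_i)\,l_i
       = \sum_i p(x_i)\log_2\frac{2^{-l_i}}{p(x_i)} \\
      &\le \log_2\Big(\sum_i p(x_i)\,\frac{2^{-l_i}}{p(x_i)}\Big)
       \qquad \text{(Jensen)} \\
      &= \log_2\Big(\sum_i 2^{-l_i}\Big) \le \log_2 1 = 0
       \qquad \text{(Kraft-McMillan)}.
    \end{aligned}
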
[00:52:30 - 00:53:08] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3150s
So at this point we've proven that for any uniquely decodable code anybody can come up with, with certain lengths for the code words, you can use a prefix code instead if you want to, which makes things very convenient; and also that you're never going to beat the entropy in expected encoding length. The question you might have next, then, is: how close can we get to entropy? Can we find a code that achieves H(X), or close to it? Because if we can, then we know we're doing close to optimal. Okay, so here's one way to think of it: the expected code length would be exactly the entropy if we take the lengths to be exactly this thing on the inside, the length for symbol x_i being log2 of 1 over p(x_i).
[00:53:08 - 00:53:46] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3188s
Then we're good to go. Now, in practice that might not be a natural number, so you might have to round it up to the nearest natural number to actually make it a bit sequence: l_i = ceil(log2(1/p(x_i))). This is essentially what's behind Shannon coding. So how about we propose this: we're going to try to encode with this thing over here. The first question you should have is: is that even possible? Is this a valid set of lengths, or would these be lengths that do not actually correspond to a code? Well, the Kraft-McMillan inequality allows us to check, for a given set of
[00:53:46 - 00:54:25] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3226s
lengths, whether there is a code that corresponds to it. So let's check: can we find a code that matches up with this? Well, this thing over here is what we have on the left-hand side of the Kraft-McMillan inequality, and we want to prove that it is smaller than or equal to one. To prove it's smaller than or equal to one, we have to make a few steps to get there; if we can prove this, then it means such a code exists and we're good to go, we can actually do this entropy coding. So, this thing is equal to this: the code lengths are given by
[00:54:25 - 00:55:04] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3265s
this quantity over here. The reason this step is a less-than-or-equal is that we're getting rid of the rounding up, and the rounding up happens in a negative exponent, so by getting rid of it we end up with something bigger. Then this thing is easy to simplify: 2 to the log2 of something is just that something, and that's what we have here. Now the sum of the probabilities is equal to one, and we are good to go: we have that 2 to the minus l_i, summed over i, is smaller than or equal to 1, which we know, from
[00:55:04 - 00:55:43] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3304s
the Kraft-McMillan inequality, implies there exists a prefix code that works with those lengths. So we now know that we can do entropy coding. This would be an alternative scheme to Huffman coding: the way you'd build the encoding here would be, you look at the probabilities of all your symbols and then you assign the lengths, and then you still need to find code words that match up with them; but assuming you can run some search or some other algorithm to find those code words, you know they exist, so you just need to find them and then you're good to go.
[00:55:43 - 00:56:25] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3343s
How good is this? Well, there's a little derivation we can do showing that this is very close to achieving entropy. So look at this over here: what's the expected length? It's the weighted sum of the lengths, so fill in the lengths. Then, what is this thing over here? Well, this length involves rounding up, so it could go up by at most one relative to the real number that's on the inside; that's the one-plus over here. Once you have that and simplify, the one comes out of the sum over all p(x_i), and in the back here we have entropy, and so we have 1 plus entropy.
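A sketch tying these steps together on the six-symbol example from earlier: the Shannon lengths ceil(log2(1/p_i)) satisfy the Kraft inequality, and the expected length lands between H and H + 1.

    from math import ceil, log2

    probs = {"A": 0.2, "B": 0.1, "C": 0.05, "D": 0.21, "E": 0.36, "F": 0.08}
    lengths = {s: ceil(log2(1 / p)) for s, p in probs.items()}
    kraft = sum(2 ** -l for l in lengths.values())
    H = sum(p * log2(1 / p) for p in probs.values())
    EL = sum(p * lengths[s] for s, p in probs.items())
    print(lengths)    # {'A': 3, 'B': 4, 'C': 5, 'D': 3, 'E': 2, 'F': 4}
    print(kraft)      # 0.65625 <= 1, so a prefix code with these lengths exists
    print(H, EL)      # H ~2.31 <= E[L] = 2.92 < H + 1
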
[00:56:25 - 00:57:05] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3385s
So the expected length is at most entropy plus 1. This is pretty good: we have discovered here not only that the best you can do in terms of the expected number of bits is the entropy, but also that you can directly use ceil(log2(1/p(x_i))) as the designated code lengths, and if you do that, you're only one bit away on average from the optimal encoding. Now, the one thing we haven't covered yet in this whole scheme is: how do you find that encoding? We now know that we could do entropy coding, and we know that it will be close to optimal,
[00:57:05 - 00:57:43] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3425s
but running a massive search over a combinatorial space might not be that practical. It turns out Huffman codes can achieve the same optimality, and we'll show that now, by induction on the number of symbols in our code book, the number of symbols n. By induction, meaning that in the proof we'll assume that if we had to encode only n minus 1 symbols and we used the Huffman encoding scheme, we would end up with an optimal prefix code for those n minus 1 symbols; and now we're going to show that under that assumption it's also true for n. And of
[00:57:43 - 00:58:20] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3463s
course, with only two symbols, or one symbol, wherever you want to start, it's clear that Huffman codes are optimal, so the base case is fine. Okay, this is actually a little intricate, but it's not too long. Huffman coding always looks at the two lowest-probability symbols, so we'll start there: we look at the two lowest-probability symbols x and y. There are always going to be two lowest-probability symbols; maybe there is a tie, but that's fine, we arbitrarily break ties. So let x and y be the two lowest-probability symbols in your original code book. Optimal prefix codes
[00:58:20 - 00:58:59] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3500s
will have two leaves in the lowest-level branch. Why is that? You have a prefix code: maybe a symbol here, a symbol here, some more symbols here; in the lowest-level branch, which is this one over here, there are two leaves. At higher levels that's not always true: here that's not true, here that's not true. But at the lowest level it's always true. Why is this always true? Imagine you didn't have two symbols left at the bottom anymore, you only had one symbol, say you didn't have this one here. What would you do? You would actually get rid of this whole split here, you'd put C up here, and now
[00:58:59 - 00:59:38] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3539s
it's going to be true again. If there aren't two symbols at the bottom and you only have one, you can just save a bit by moving it up. Okay, so at the bottom there are always going to be two leaves in that lowest branch. Then, without loss of generality, we can assume that the symbols x and y have the same parent. Why does that have to be the case? Well, imagine your tree looks like this: it could be that x is here, y is here, and there's a z here and a w here; it could be that they don't have the same parent. But because they're the lowest-probability symbols, they will always sit at the lowest level,
[00:59:38 - 01:00:22] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3578s
and at the lowest level you can interchange where they live: you can always make x and y appear together and put w over here, and so now they have the same parent. It's effectively the same code, you just move things around at the bottom. So x and y have the same parent; hence there is an optimal prefix tree that has x and y together at the lowest level with the same parent. So that's where we are now. The steps we've made allow us to conclude this line over here: no matter the tree structure, the additional cost of having x and y, rather than just
[01:00:22 - 01:00:58] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3622s
a single parent symbol z, as you would need when sending with n minus 1 symbols (now, with n symbols, we have x and y), is: the extra cost will be p(x) plus p(y). Why is that? If you only had to go to the parent of x and y, you wouldn't have to go down that extra level in the tree, and whenever you do have to go an extra level it costs you one extra bit, which happens a p(x) plus p(y) fraction of the time. Now, the n-symbol Huffman code tree adds exactly this minimal cost to the
[01:00:58 - 01:01:51] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3658s
optimal (n minus 1)-symbol Huffman code tree, which is optimal by induction. So this here is the final part of the proof. It's saying: no matter what tree you build, you'll always pay a price of p(x) plus p(y) when you need to split on x and y; you can't just get away with a parent z, that's unavoidable, and the Huffman code tree will have them appear together that way. So the Huffman code tree is incurring the minimum possible cost for being an n-symbol tree versus an (n minus 1)-symbol tree.
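In symbols, the induction step just argued can be summarized as follows (a paraphrase of the argument, with T_n the Huffman tree on n symbols and L(.) the expected code length):

    L(T_n) = L(T_{n-1}) + p(x) + p(y),
    \qquad
    L(T') = L(T'_{n-1}) + p(x) + p(y) \;\ge\; L(T_{n-1}) + p(x) + p(y) \;=\; L(T_n)

for any competing prefix tree T' arranged, as argued above, with x and y as siblings at the deepest level, where T'_{n-1} is T' with x and y collapsed into their parent; the middle inequality holds because T_{n-1} is optimal by the induction hypothesis.
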
[01:01:51 - 01:02:31] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3711s
It's adding that minimal cost to what it has built so far, which is an (n minus 1)-symbol tree, which we know is optimal by induction, and we're good to go. Alright, so a quick recap of everything we covered. Entropy is the expected encoding length when encoding each symbol with length log2(1/p(x_i)); that's the equation for entropy. Shannon's theorem from 1948 says that if your data source p(X) is an order-0 Markov model, meaning there are no dependencies between symbols that you account for or are able to account for, then a compression scheme that independently codes each symbol in your
[01:02:31 - 01:03:25] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3751s
sequence must use at least entropy bits per symbol on average. A Huffman code is able to do that with an overhead of at most one. How do we know that? Because entropy coding has an overhead of at most one, and we proved that Huffman codes are optimal; so given that entropy coding has an overhead of at most one, Huffman codes provide a constructive way of achieving something that also has an overhead of at most one beyond the entropy cost. Any questions? I have a question: for the competition you mentioned in the
[01:03:25 - 01:04:07] https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3805s
beginning, the 500,000 euro competition, if we just take the file and compute the entropy of the file that's provided, that would give like the minimum number of bits, right? Can we just use that to see if it would be like 116 megabytes, like, would that give a lower bound on what can be achieved? So yeah, that's a really good question, and what you're getting at is in some sense exactly this thing over here. So far we've assumed the order-0 Markov model, and what that
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
assumes is that the file is a sequence of symbols. Let's say there are only 26 letters and nothing else in that file, of course there are other symbols too, but you could just look at the frequencies of each of those letters, you could then compute the entropy and say okay, this is the entropy, and now if I want to compress this by giving each of my 26 letters a bit sequence as its encoding, what's the best I can possibly do? I can actually find that number, and you'll find that it is going to be more than that
01:04:07
01:04:41
3847
3881
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3847s
https://i.ytimg.com/vi/p…axresdefault.jpg
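A sketch of the calculation being discussed here, under the order-0 assumption: count symbol frequencies (bytes, in this sketch) in the file and compute the entropy of that frequency distribution. The filename is a placeholder for whatever file you want to analyze, e.g. the competition file.

```python
import math
from collections import Counter

# Hypothetical filename -- substitute the file you actually want to analyze.
with open("corpus.txt", "rb") as f:
    data = f.read()

counts = Counter(data)
total = len(data)

# Order-0 entropy: treat every byte as independent, use only its frequency.
entropy_bits_per_byte = -sum(
    (c / total) * math.log2(c / total) for c in counts.values()
)

print(f"order-0 entropy: {entropy_bits_per_byte:.2f} bits per byte")
print(f"best size under this model: {entropy_bits_per_byte * total / 8 / 1e6:.1f} MB")
```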
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
116 megabytes, because otherwise somebody would have long since done it. The reason there is hope to do better, and we'll get into how soon, is that in reality the letters in that file are not independent: when you see the first three letters you might have an easy time predicting the fourth letter, because there are only so many reasonable completions of the word whose first three letters you've already seen, and so then the calculus becomes a little different, and we'll get to that in a moment
01:04:41
01:05:11
3881
3911
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3881s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
and that's where things get complicated: it's then not as simple as counting frequencies of each of the symbols, you really need, effectively, a generative model that can predict the next symbol from the previous symbols, and measure the entropy in that manner, and then the question is how good a model you built. And yes, if you can build the world's best generative model to predict the next character in that sequence and you look at the entropy of that, then you might have a lower bound
01:05:11
01:05:40
3911
3940
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3911s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
roughly speaking, I mean, you'd have to think through a few details to be sure it's exactly true, but it would give you a pretty good estimate of what the optimal encoding might be, and we'll look at a few examples soon, like three slides from now we'll get to a few more things that touch upon exactly what you're asking about. Really good question. Other questions? Okay, let's move on then. So a couple of coding considerations we want to look at here: what happens when your frequency counts, or maybe some more complicated estimate of the distribution over symbols,
01:05:40
01:06:36
3940
3996
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3940s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
is not precise: you have an estimate p hat but really the distribution is p; what's going to happen to the performance of your compression scheme? Higher order models, where we predict the next symbol from previous symbols: how can that help you? And what about that plus one: is it as innocent as it seems, or is it actually very bad sometimes, and what can we do about it? So the expected code length when using p hat to construct the code is going to be an expected code length where the expectation is in reality with respect to p, because the way we encounter symbols is governed by
01:06:36
01:07:14
3996
4034
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3996s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
p, so the probability of your symbol i is p(i), but the code length we assign is based on p hat of i. So then this is our expected code length, where for simplicity we don't round up to whole numbers of bits for the encoding. A simple calculation then: we add and subtract the same quantity, and this quantity over here in the front we recognize as a KL divergence, and the thing in the back we recognize as entropy. So we see the expected code length when we use a distribution estimate p hat is going to be the entropy plus, and we know it's always going to be more, since any
01:07:14
01:07:59
4034
4079
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4034s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
encoding is going to cost you at least entropy, maybe more; it's going to cost you an additional KL divergence between p and p hat, so the price you pay is a KL divergence. We know that the log likelihood objective when we learn a generative model essentially comes down to minimizing the KL divergence between the data distribution and the model that you learn, so effectively, when we're maximizing log likelihood we're minimizing this KL divergence, effectively trying to find a distribution that will incur minimal overhead when we use it
01:07:59
01:08:31
4079
4111
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4079s
https://i.ytimg.com/vi/p…axresdefault.jpg
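The identity being derived here is E_{x~p}[log2 1/p_hat(x)] = H(p) + KL(p || p_hat). A small numeric check with made-up distributions, ignoring the rounding to whole bits exactly as the lecture does:

```python
import math

p     = [0.5, 0.25, 0.25]   # true symbol distribution
p_hat = [0.7, 0.2, 0.1]     # model estimate used to build the code

# Idealised code lengths log2(1 / p_hat), no rounding to whole bits.
expected_len = sum(pi * math.log2(1.0 / qi) for pi, qi in zip(p, p_hat))

entropy = -sum(pi * math.log2(pi) for pi in p)
kl      = sum(pi * math.log2(pi / qi) for pi, qi in zip(p, p_hat))

print(expected_len)    # ~1.668
print(entropy + kl)    # identical: E_p[length] = H(p) + KL(p || p_hat)
```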
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
to encode our data. Notice there are two ways you can prove the KL divergence is positive: we can prove it because we already know every encoding has to cost at least entropy bits, which means this extra term must be positive, or you can prove it from first principles using Jensen's inequality, which is shown at the bottom here. But so we will also pay a price corresponding to the KL divergence, and the better our generative model is, the better our encoding scheme can be. And so when we think about encoding with generative models, there are
01:08:31
01:09:04
4111
4144
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4111s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
really two things going on: you want to somehow figure out a good encoding scheme, but the other part is you want to do really well at this part over here, which is maximum likelihood estimation, because that's what ensures your encoding scheme is actually good on the data. Now what if P(X) is high entropy? If P(X) is high entropy that would give a very long code length, which you might not like. You might be able to decrease the entropy by considering conditional entropies: if you condition X on the context, let's say what has come before,
01:09:04
01:09:41
4144
4181
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4144s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
you may be able to reduce the entropy. In fact, it's easy to prove that the conditional entropy of X given context C is never larger than the unconditional entropy H(X). Autoregressive models do exactly this: in an autoregressive model you predict the next symbol based on everything you've seen so far, and often the next symbol or the next pixel is going to be a lot easier to predict, so a lot lower entropy, than independently predicting each pixel. And so, going back to the price thing that we were talking
01:09:41
01:10:14
4181
4214
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4181s
https://i.ytimg.com/vi/p…axresdefault.jpg
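A toy illustration (numbers made up, not from the lecture) of the claim that conditioning never increases entropy, H(X | C) <= H(X): marginally the symbol looks like a fair coin, but given the context it is quite predictable.

```python
import math

# Toy joint distribution p(c, x) over a binary context c and binary symbol x.
p_joint = {
    (0, 0): 0.45, (0, 1): 0.05,
    (1, 0): 0.05, (1, 1): 0.45,
}

def entropy(dist):
    return -sum(q * math.log2(q) for q in dist if q > 0)

# Marginals p(x) and p(c)
p_x, p_c = {}, {}
for (c, x), q in p_joint.items():
    p_x[x] = p_x.get(x, 0.0) + q
    p_c[c] = p_c.get(c, 0.0) + q

# Conditional entropy H(X | C) = sum_c p(c) * H(X | C = c)
h_cond = 0.0
for c in p_c:
    cond = [p_joint[(c, x)] / p_c[c] for x in (0, 1)]
    h_cond += p_c[c] * entropy(cond)

print(entropy(p_x.values()))  # H(X)   = 1.0 bit
print(h_cond)                 # H(X|C) ~ 0.47 bits, never larger than H(X)
```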
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
about: effectively this is saying that if you don't encode each symbol independently but instead train a conditional distribution, then you should not do worse, and likely you will do better, than when each symbol is encoded separately. Alright, how about the plus one? It might seem pretty innocent: entropy is optimal, we pay entropy plus one, why not pay that price of one? Let's look at an example where it might actually be pretty bad, and it's not going to be
01:10:14
01:10:53
4214
4253
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4214s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
uncommon: if we have a good predictive model, which makes H(X) very low, then the plus one can be a very large relative overhead. For example, our distribution over symbols, three symbols here, is very peaked, mostly on the first symbol, because we can predict the next letter or the next pixel very easily: a very peaked distribution with 90% of the mass, then 5% and 5%. The entropy of this thing is roughly 0.57, but we will pay a penalty of plus one, and in fact each symbol we send is going to cost us at least one bit, because sending anything across the channel
01:10:53
01:11:37
4253
4297
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4253s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
will cost at least one bit, so we pay a price that's pretty high. So here's the optimal code for this: we could use just 0 for the 0.9 symbol and then 10 and 11 for the other two, and the expected code length will be 1.1. So we're going to pay a price here that's actually pretty big, almost twice the length of the code compared to what entropy predicts as the lower bound. So this plus one gets expensive: if you send a long sequence of symbols you essentially send twice the bits compared to what in principle you wish you could be getting.
01:11:37
01:12:14
4297
4334
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4297s
https://i.ytimg.com/vi/p…axresdefault.jpg
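The numbers in this example, reproduced as a quick check (the code {0, 10, 11} is the Huffman code for this source):

```python
import math

p = {"a": 0.9, "b": 0.05, "c": 0.05}
code = {"a": "0", "b": "10", "c": "11"}   # optimal prefix code for this source

entropy = -sum(q * math.log2(q) for q in p.values())
expected_len = sum(p[s] * len(code[s]) for s in p)

print(f"entropy        ~ {entropy:.3f} bits/symbol")       # ~0.569
print(f"expected length = {expected_len:.3f} bits/symbol")  # 1.100, nearly 2x entropy
```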
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
How can we get around this? Let's take a moment to think about that: anybody, any suggestions? Could you use larger chunks? Can you use larger chunks, exactly. Why would you care about larger chunks? The reason this price, the plus one, is expensive is that when you only have three symbols and you send one symbol, you still need to use at least one bit, but one symbol doesn't have much information in it, in this case very little information, since it's almost always just the first symbol. If we send multiple symbols in
01:12:14
01:13:05
4334
4385
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4334s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
one go, let's say we turn this into a distribution over triples: we have three symbols a, b, c and we choose a distribution over triples, there could be aaa, there could be aab, there could be aac and so forth. Now we have 3 to the 3, which is 27, possible combined symbols that we're trying to send, and with that the math will work out a lot more nicely: the overhead per symbol becomes a lot smaller than when we try to send just one symbol at a time. So let's take a look at this in action. One way people have done this is in actually sending faxes; not sure how many of you have used faxes, but essentially
01:13:05
01:13:51
4385
4431
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4385s
https://i.ytimg.com/vi/p…axresdefault.jpg
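A rough sketch of the blocking idea applied to the same peaked source: treat every length-k block as one super-symbol, Huffman-code the product alphabet, and the per-symbol cost drops toward the entropy as k grows. The Huffman routine below only tracks codeword lengths, which is all we need; this is an illustration, not the lecture's own code.

```python
import heapq
import itertools
import math

p = {"a": 0.9, "b": 0.05, "c": 0.05}

def huffman_lengths(probs):
    heap = [(q, i, [s]) for i, (s, q) in enumerate(probs.items())]
    heapq.heapify(heap)
    lengths = {s: 0 for s in probs}
    tie = len(heap)
    while len(heap) > 1:
        q1, _, s1 = heapq.heappop(heap)
        q2, _, s2 = heapq.heappop(heap)
        for s in s1 + s2:          # everything under the merged node gets 1 bit deeper
            lengths[s] += 1
        heapq.heappush(heap, (q1 + q2, tie, s1 + s2))
        tie += 1
    return lengths

entropy = -sum(q * math.log2(q) for q in p.values())
for k in (1, 2, 3, 4):
    # Product alphabet: every length-k block is one "super symbol".
    block_probs = {
        blk: math.prod(p[s] for s in blk)
        for blk in itertools.product(p, repeat=k)
    }
    lens = huffman_lengths(block_probs)
    bits_per_symbol = sum(block_probs[b] * lens[b] for b in block_probs) / k
    print(k, round(bits_per_symbol, 3), "vs entropy", round(entropy, 3))
```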
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
before there was email there was something called faxes, where you could send documents over a phone line, and the way a page was encoded was essentially pixel by pixel: is this pixel white or black, as you step through the entire page. Naively you'd have to send one bit per pixel, white or black, which is very expensive, because usually there are going to be a lot of whites in a row or a lot of blacks in a row. So you can instead encode it as the number of whites, then the number of blacks, then the number of whites; it's called run-length coding, and
01:13:51
01:14:24
4431
4464
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4431s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
that's what they came up with. So what are your symbols now? Your symbols are a run of one white, a run of two whites, a run of three whites, a run of four whites, and the same for black. You list out all possible run lengths that you might care about encoding, then you look at the probabilities of each of those run lengths, build a Huffman code, and then you have the encoding you're going to be using, and you get a very compressed representation of that page that you're trying to send across. Shannon also asked
01:14:24
01:15:03
4464
4503
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4464s
https://i.ytimg.com/vi/p…axresdefault.jpg
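A minimal run-length encoder for one row of fax-style pixels; this only produces the runs, and a real fax codec would then Huffman-code the run lengths as described above.

```python
from itertools import groupby

def run_length_encode(row):
    """Turn a row of pixels (0 = white, 1 = black) into (value, run_length) pairs."""
    return [(value, len(list(group))) for value, group in groupby(row)]

row = [0] * 12 + [1] * 3 + [0] * 20 + [1] * 1
print(run_length_encode(row))
# [(0, 12), (1, 3), (0, 20), (1, 1)] -- 4 run symbols instead of 36 raw bits
```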
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
a question about the English language: how much entropy is there in the English language? People have done this experiment. So the question here is what is the conditional entropy of X at position n given X 0 through X n minus 1, in other words how predictable is the next character. Shannon ran his experiment and concluded that English is only about one bit per character, so if you train a conditional model that predicts the next character given everything before, you could get an entropy of about one bit. How do you even figure that out?
01:15:03
01:15:37
4503
4537
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4503s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
The way he did it is he actually asked people to do completions. So you would say: okay, here's the beginning of some text, now predict for me the next character; the person predicts a character, and then Shannon would say right or wrong. If it's right you're done, if it's wrong you get to guess again. 79% of the time people get it correct on the first guess, 8% of the time it takes two guesses, 3% of the time it takes three guesses, and so forth. Whenever you get communicated back whether your guess was right or wrong, effectively one bit of
01:15:37
01:16:20
4537
4580
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4537s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
information was communicated about what the underlying character is, and so you can just take the weighted sum here and you'll see that it lands at roughly one, which means that you need about one bit of information per character on average. People have not gotten to that yet, by the way, I mean at least fully automatic compression schemes have not gotten to that level yet, but things are getting closer and closer over time. So, looking at practical schemes: if you use just a fixed 7-bit encoding, well then you have seven bits per
01:16:20
01:17:01
4580
4621
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4580s
https://i.ytimg.com/vi/p…axresdefault.jpg
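A back-of-the-envelope version of the weighted sum mentioned here. Only the first three probabilities (79%, 8%, 3%) come from the lecture; the tail of the distribution is invented just so it sums to one, so treat the output as illustrative only.

```python
import math

# Fractions of characters guessed correctly on the 1st, 2nd, 3rd, ... attempt.
# First three values are from the lecture; the rest is a made-up tail.
guess_probs = [0.79, 0.08, 0.03, 0.02, 0.02, 0.01, 0.01, 0.01, 0.01, 0.02]
assert abs(sum(guess_probs) - 1.0) < 1e-9

# Entropy of the guess-count distribution as a rough bits-per-character estimate,
# since each right/wrong answer reveals about one bit.
estimate = -sum(q * math.log2(q) for q in guess_probs)
print(f"~{estimate:.2f} bits per character")
# ~1.3 bits with this made-up tail; the exact figure depends on the tail,
# but it is of the order of one bit per character.
```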
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
character. If you use entropy encoding of individual characters, of your 2 to the 7, that is 128, possible characters, you'd need about 4.5 bits per character; that's of course the bound you get if you look at the entropy, but you can't perfectly achieve it because you have to round to a whole number of bits, so the Huffman code, which is optimal, achieves about 4.7. Now if you look at the entropy of groups of eight symbols and then look at the average entropy per character, you land at about 2.4, and this is
01:17:01
01:17:43
4621
4663
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4621s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
thought to go, asymptotically, to around 1.3. So what you want to do is, instead of encoding one character at a time, maybe encode eight characters at a time, and a Huffman code will then achieve something probably slightly above 2.4 bits per character. Okay, I propose we take a five-minute break here; we'll restart at 6:25 and start looking at how some of these ideas tie into the generative models we've been studying. Alright, let's restart. Any questions before we go to the next topics? Alright then, let's take a look at how we can
01:17:43
01:23:43
4663
5023
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4663s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
combine autoregressive models, which we covered in one of the first weeks of this class, with coding. The key motivation here is that we want a flexible system to group multiple symbols, to avoid the potential plus one overhead on every symbol, without having to decide ahead of time how long each group is going to be, and we want to be able to encode on the fly. So a question we might have is how many symbols, and which symbols, to group; in a naive system that's what you'd have to do, yes, decide how many symbols I'm going to group in
01:23:43
01:24:31
5023
5071
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5023s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
groups of 3 or 10 or whatever, and then make some decisions about how to group them. The way it's actually done, though, is that we don't need to decide on how many symbols or which symbols we're doing this for: we're going to encode every possible symbol sequence by mapping it onto a distribution, I'll show an example very soon, and this works for streaming data and is extremely compatible with autoregressive models. So let's take a look at an example: we have an alphabet with two symbols a and b, probability of a is 0.8, probability of b
01:24:31
01:25:16
5071
5116
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5071s
https://i.ytimg.com/vi/p…axresdefault.jpg
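Anticipating the example that starts here: the construction about to be introduced (arithmetic coding) maps every symbol sequence to a subinterval of [0, 1) whose width equals the sequence's probability. A minimal sketch of that interval-narrowing step, using the stated probabilities P(a) = 0.8, P(b) = 0.2; this is an illustration, not the lecture's exact formulation.

```python
p = {"a": 0.8, "b": 0.2}
cum = {"a": 0.0, "b": 0.8}   # cumulative probability mass before each symbol

def interval_for(sequence):
    """Narrow [0, 1) symbol by symbol; the final width equals P(sequence)."""
    low, width = 0.0, 1.0
    for s in sequence:
        low += width * cum[s]
        width *= p[s]
    return low, low + width

print(interval_for("aab"))
# (0.512, 0.64): width 0.128 = 0.8 * 0.8 * 0.2, so about
# -log2(0.128) ~ 3 bits are enough to point inside this interval.
```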