Columns (stringlengths): video_id 11-11, title 0-100, text 513-648, start_timestamp 8-8, end_timestamp 8-8, start_second 1-5, end_second 2-5, url 48-52, thumbnail 0-52
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
is 0.2. If we individually encode this, we'd maybe send a 0 for a and a 1 for b, and that's going to be a lot of overhead, because a costs us just as much as b even though it's way more likely; really there should be a way to make it cheaper. The second most naive thing would be what we talked about earlier: ahead of time, decide what the code for three a's is going to be, three b's, two a's and a b, and so forth. We're going to do something quite different. What we're going to do is say, okay, symbols come in: a, a, b, a.
01:25:16
01:25:51
5116
5151
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5116s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
Let's say we first encode the first symbol here, a. We have a distribution available to us that models the probability of these symbols; I'm going to say, okay, 80% chance it's a, 20% chance it's b. So we're actually going to map the fact that we have an a to this interval over here. You can think of all possible random events that could happen in the world as points in the 0 to 1 interval; landing in the 0 to 0.8 interval is the event that has happened. Then, when the next a comes in, we're going to take that interval that
01:25:51
01:26:28
5151
5188
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5151s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
we're working with now, 0 to 0.8, and say, okay, it's again an a, so it must have fallen within the first 80% of that new interval. Then it's a b, which means it falls in the last 20% of that interval there. And then it's an a, which again means the first 80%. So what we end up with (let me take a different color that's more visible, there's a lot of green already) is the notion that this string a a b a gets mapped to a very specific interval within the 0 to 1 interval, and the way we do this, it should be clear that it is unique for every
01:26:28
01:27:17
5188
5237
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5188s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
string: every string will have a unique interval it ends up in, and for a different sequence we end up with a different interval. The idea behind arithmetic coding is that what we're going to communicate is the interval, so we need a way to communicate this thing over here. Now, we still have to decide how we're going to communicate it, but that's the idea. And you don't need to know ahead of time how long your bit string or your symbol string is going to be, because this interval maps one-to-one to whatever symbol sequence you receive. So we just need to encode this and
01:27:17
01:27:55
5237
5275
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5237s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
we're good to go. If there were more symbols coming in, if there was another b after this, we would have split this thing again and ended up with a smaller interval; if there was another a after this, we'd have split it up a bit more and gotten a smaller interval, and so forth. So: a one-to-one mapping between symbol sequences of arbitrary length and intervals. Okay, how do we code an interval? Let's start with a naive attempt at encoding an interval: represent each interval by selecting a number within the interval.
01:27:55
01:28:31
5275
5311
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5275s
https://i.ytimg.com/vi/p…axresdefault.jpg
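As an aside, here is a minimal Python sketch of the interval construction just described; this is my own illustration, not code from the lecture, and it assumes the running example P(a) = 0.8, P(b) = 0.2 (the function name sequence_to_interval is made up for this sketch).

```python
def sequence_to_interval(symbols, probs):
    """Map a symbol sequence to its subinterval of [0, 1)."""
    low, width = 0.0, 1.0
    for s in symbols:
        # Offset of this symbol's slice within the current interval.
        offset = sum(probs[t] for t in sorted(probs) if t < s)
        low += width * offset
        width *= probs[s]
    return low, low + width

probs = {"a": 0.8, "b": 0.2}
# Interval of width 0.8 * 0.8 * 0.2 * 0.8 = P("aaba")
print(sequence_to_interval("aaba", probs))
```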
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
Specifically, the number that uses the fewest bits in binary fractional notation, and you send that as the code. So for example, if we had these intervals, we could represent them with 0.01 for the first interval, 0.1 for the second one, and 0.11 for the third one, because those are binary numbers that fall into each of those respective intervals. It's not too hard to show that for an interval of size s, the width of the interval, we need at most log2(1/s) bits, rounded up, to represent such a number, which is great because that means
01:28:31
01:29:09
5311
5349
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5311s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
that, because the width s here is really the probability of the symbol sequence, we're achieving entropy coding up to the rounding. The problem here is that these codes are not a set of prefix codes. For example, we have 1 here that we would send for the second symbol, but after we receive a 1 we wouldn't know: did they send us the second symbol, or was it the third symbol? Is it the second symbol sent twice, or the third symbol sent once? There's no disambiguation. And so this scheme, while it might seem reasonable at first and it's efficient, it actually
01:29:09
01:29:47
5349
5387
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5349s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
doesn't allow you to decompress correctly. So what else can we do? We have each binary number correspond to the interval of all possible completions. So for the above example, when we say 0.00 it means the interval from 0 to 0.25; when we say 0.100 it means the interval from 0.5 to 0.625; when we say 0.11 it means the interval from 0.75 to 1. So we're going to want it to be the case that (remember, on the previous page, any symbol sequence we want to send will result in an interval we want to send) we're
01:29:47
01:30:26
5387
5426
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5387s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
going to find a bit sequence such that, when you look at the corresponding interval it maps to (which is given here by example), that entire interval falls inside the interval we're trying to encode, leaving no ambiguity about which interval it belongs to, and it will not be a prefix of anything else. If you work out the details of this, it turns out you get an overhead of two bits instead of one, but that's actually pretty good, because with this kind of arithmetic coding we can code arbitrarily many symbols.
01:30:26
01:31:08
5426
5468
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5426s
https://i.ytimg.com/vi/p…axresdefault.jpg
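Here is a rough Python sketch of the fix just described: pick a binary fraction whose "completion interval" lies entirely inside the target interval, which removes the ambiguity at the cost of roughly two extra bits. This is my own illustration (interval_to_bits is a made-up name, and the brute-force search over code lengths is just for clarity), not the lecture's implementation.

```python
def interval_to_bits(low, high):
    """Shortest bit string b such that [0.b, 0.b + 2^-len(b)) fits inside [low, high)."""
    k = 1
    while True:
        step = 2.0 ** -k
        m = -(-low // step)          # ceil(low / step): smallest k-bit fraction >= low
        code_low = m * step
        if code_low + step <= high:  # the whole completion interval fits
            return format(int(m), "0{}b".format(k))
        k += 1

# The 'aaba' interval from the sketch above has width 0.1024, and
# -log2(0.1024) + 2 is about 5.3, so a 5-bit code is expected here.
print(interval_to_bits(0.512, 0.6144))  # -> '10001'
```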
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
The plus-two overhead is only incurred once for the entire sequence, instead of incurred for every symbol we send across, so it's a one-time overhead for the entire sequence that we encode this way. Obviously we'd like to avoid the plus two, but it's not that bad. Any remaining challenges? Well, sometimes when you follow this scheme, what will happen is that the interval you're finding, as you go through your a b a b a and-so-forth sequence, starting from that interval from zero to one, you might find that at some point you have
01:31:08
01:31:44
5468
5504
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5468s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
an interval like this, with 0.5 here, just marking this. The next thing, you realize, oh, you're actually here; the next thing, maybe you're here; and the way it works out is that you keep ending up with an interval that straddles 0.5. If that's the case, you're never able to send that first bit until your entire sequence is complete. And so the solution to that is that, even though in principle, to minimize the number of bits you need to send, you need to go to the end of your symbol sequence and code the whole thing and
01:31:44
01:32:19
5504
5539
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5504s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
then send all your bits, if you want to minimize latency and not wait till the end of the whole thing before you can send anything at all, you'll split it into smaller blocks, such that if it keeps straddling 0.5 you can at some point say, okay, I'm done, this is a big enough block, I'm sending it across. Another thing: this scheme as I described it assumes infinite precision. It assumes that you can actually compute these intervals precisely, and this interval becomes smaller and smaller over time, and so you could imagine that you'd start running into under-
01:32:19
01:32:49
5539
5569
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5539s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
flow if you just do standard floating-point calculations to compute those intervals, and then of course you would start losing information, because the floating-point system can't encode the information you need to encode. There is a solution to that: you can actually convert this all into a scheme where you only compute with integers, and the compression survey that I linked on one of the very first slides explains how to turn this into an integer implementation rather than relying on real numbers. Now that we know how to
01:32:49
01:33:23
5569
5603
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5569s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
encode, let's think about how autoregressive models can play well with this. So far we said we have P(a), P(b), but actually nothing in this entire scheme that we described requires that the distribution used for P(x1) has to be the same distribution as we use for P(x2), for x3, and x4. We can instead use conditionals that are more precise and more predictive of the next symbol, and hence lower entropy and a more effective encoding scheme. And so this arithmetic coding scheme is perfectly compatible with autoregressive models: you can just work
01:33:23
01:33:59
5603
5639
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5603s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
your way, let's say, pixel after pixel: get the distribution for the next pixel, next pixel, next pixel, and encode with arithmetic coding accordingly, working your way through an image. The better the log probability, the better the compression will be, so a better likelihood of your autoregressive model will mean better compression of that data, and these two schemes couldn't be any more compatible: perfectly lined up, predict one symbol at a time and encode one symbol at a time and keep going. So let me pause here and see if there are questions about arithmetic coding.
01:33:59
01:34:37
5639
5677
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5639s
https://i.ytimg.com/vi/p…axresdefault.jpg
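To make the autoregressive connection concrete, here is a hedged Python sketch; conditional is a hypothetical stand-in for an autoregressive model's per-step predictive distribution (the probabilities are purely illustrative), and the point is simply that each step narrows the interval by the conditional p(x_t | x_<t).

```python
import math

def conditional(prefix):
    # Hypothetical per-step predictive distribution over {'a', 'b'}; illustrative numbers.
    return {"a": 0.9, "b": 0.1} if prefix.endswith("a") else {"a": 0.8, "b": 0.2}

def ideal_code_length(symbols):
    total, width = 0.0, 1.0
    for t, s in enumerate(symbols):
        p = conditional(symbols[:t])[s]
        width *= p                 # interval shrinks by the conditional probability
        total += -math.log2(p)     # bits contributed by this symbol
    return total                   # ~ -log2(width); arithmetic coding pays this plus ~2

print(ideal_code_length("aaba"))
```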
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
And then we'll switch to a very different kind of coding scheme. Okay, now I'll switch to thinking about how we can use a variational autoencoder, something called bits-back coding, and asymmetric numeral systems to encode information. This, at least to me, is one of the most mind-boggling things: how is this even possible? It's confusing at first, but I hope that, the way we laid it out in the slides, it will become clear how exactly it works. But there's this notion that somehow you get bits back, and hence you
01:34:37
01:35:28
5677
5728
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5677s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
send bits, but it's actually not as expensive as you thought it was, because you get bits back, and we'll make that more precise soon. The references for this part of the lecture are listed here, with the initial bits-back paper by Frey and Hinton from '97; actually, the first one here was on using the idea in the context of minimum description length for the weights of a neural network, and then later work started looking at source coding. Then there was this paper here, the bits-back ANS paper, so a lot of people refer to it that way: bits-back ANS.
01:35:28
01:36:10
5728
5770
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5728s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
Let me restart the slide for a moment. So the first thing that happened with bits back is the thing at the bottom here; this was in the context of learning, not in the context of coding. The next thing that happened was in the context of actually making this practical as a coding scheme, but this used arithmetic coding. It turns out that the scheme we're going to look at is not very compatible with arithmetic coding, unlike autoregressive models, which are almost designed to do arithmetic coding; when you have a VAE it's not very
01:36:10
01:36:52
5770
5812
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5770s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
compatible with that, not in the same way. So this resulted in lots of overhead, lots of extra bits that need to be communicated; chunking has to happen a lot, when you usually don't want to chunk because you lose efficiency. That was in '97. Then in 2019 this beautiful paper came out by Townsend et al., who showed they can do this with ANS rather than arithmetic coding. So the underlying information-theoretic scheme used in their approach is ANS rather than arithmetic coding. We haven't covered ANS yet, but the higher-level thing is that
01:36:52
01:37:30
5812
5850
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5812s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
arithmetic coding looks at your data as a stream, you go literally through it; ANS doesn't. Encoding with ANS acts more like a stack: pushing things onto and popping things from a stack is the way things get encoded, and that matches much better with the ideas we're going to step through here. And ANS is actually practical; in fact ANS is used in many places, but specifically here it's very well matched with VAE-type coding schemes. Then in our work at Berkeley, Jonathan led a lot of this work, together with Friso Kingma and myself; we
01:37:30
01:38:03
5850
5883
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5850s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
looked at essentially this paper here and made it more efficient by looking at hierarchical latent variable models rather than just single-latent-variable autoencoders. All of this builds on the ANS scheme invented by Jarek Duda in 2007, which is used in many coding schemes. It's interesting that a lot of the information theory here was invented in the 1940s and 1950s, right: Shannon's theorem 1948, Huffman codes 1952; ANS, the third coding scheme we cover today, was invented in 2007, wow, at a time when nobody thought you could still invent
01:38:03
01:38:43
5883
5923
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5883s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
really groundbreaking new things that would be that widely used in compression, and sure enough they did. So, a quick refresher on what we covered: entropy coding assigns a length of log2(1/p(x_i)) to the encoding of a symbol; entropy, we know, is a lower bound on how long your codes have to be on average; Shannon's theorem says that we can't do better than entropy; Huffman says that with the Huffman scheme you can get to entropy plus one; arithmetic coding encodes arbitrarily many symbols in one go and pays a plus two, but it's a plus two for the entire symbol sequence, not per symbol.
01:38:43
01:39:20
5923
5960
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5923s
https://i.ytimg.com/vi/p…axresdefault.jpg
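In symbols, the bounds just recapped read roughly as follows (my transcription of the slide's claims, not a verbatim copy):

```latex
\[
\ell(x_i) = \log_2 \tfrac{1}{p(x_i)}, \qquad
H(X) \;\le\; \mathbb{E}[\ell] \;\le\; H(X) + 1 \quad \text{(Huffman, per symbol)},
\]
\[
\mathbb{E}\big[\ell(x_{1:n})\big] \;\le\; H(x_1,\dots,x_n) + 2 \quad \text{(arithmetic coding, once per whole sequence)}.
\]
```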
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
So it's actually more efficient than doing Huffman for each separate symbol. That's what we've covered so far. There were some key assumptions: it is assumed that we have a model p(x) for which we can do the following: tractably enumerate all x for Huffman, otherwise we can't build that tree, since we need to enumerate everything. Well, if you want to enumerate all possible images you might possibly want to encode, no, you can't build a Huffman tree for that. Arithmetic coding gets around that: you only need
01:39:20
01:39:57
5960
5997
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5960s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
to be able to assign probabilities to the next symbol in your sequence; if you can do that, you can use arithmetic coding. But even that tends to require that there's a relatively small number of symbol values; if your symbol can take on infinitely many values, it's not really clear how you'd do arithmetic coding. So where does this fail? One, when x is continuous, but that's actually quite fixable, we'll look at that on the next slide; and two, when x is high-dimensional, and that's the main challenge we'll be looking at. The observation here is that
01:39:57
01:40:31
5997
6031
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5997s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
some high-dimensional distributions still allow for convenient coding, and we'll see some examples. What we will want to do is leverage that to efficiently code mixture models of these easy high-dimensional distributions. The key result that we'll get from this part of the lecture is that as long as a single, non-mixture model can be encoded against efficiently, we'll see a scheme that from there allows us to encode data coming from the mixture model also very efficiently. And of course mixture models are often a lot more expressive than
01:40:31
01:41:04
6031
6064
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6031s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
their individual components, which means that we can now have a coding scheme that is designed around a much more expressive distribution class that you can fit to your data. Oh, that slide seems to be out of order, okay. Well, so, a real number x has infinite information, so we cannot really expect to send a real number across a line in a finite amount of time, because the digits keep going forever, new information in every bit. So what we're going to do is assume, if we have to deal with continuous variables x, that we can discretize, and
01:41:04
01:41:44
6064
6104
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6064s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
that we're happy with the discretization. So we discretize up to some precision t. You can discretize in two ways: imagine you have a Gaussian, x is on the horizontal axis and this is a Gaussian distribution; you can discretize on the x axis, or alternatively, and often more conveniently, you can discretize in the cumulative distribution. So as a function of x, the cumulative distribution will run something like this, and it goes from 0 to 1; you can discretize there. First of all, if you just discretize on x, what are you going to do with
01:41:44
01:42:33
6104
6153
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6104s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
the tails? You'd probably make one big interval that goes to infinity, but that's still somewhat inconvenient. Also, if you discretize on x, well, this interval has a lot of probability mass and this one doesn't have much probability mass. If instead you discretize based on the cumulative, you're just saying every interval has the same probability mass; that's how I'm going to discretize: I'd be located here, then from there to here, then this interval, you go here, this interval, then you go... well, it's not perfectly drawn, but you'd
01:42:33
01:43:07
6153
6187
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6153s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
get an interval here, an interval here, an interval here, and so forth. So that's a way you can discretize continuous variables with equal probability per bin, versus doing it directly on the x axis. We can look at something called the discretized variable x, discretized into intervals of width t; so this is this version of it, okay. Look at the entropy of that variable, which will involve the probability of being in interval i, which is t, the width, times the height p(x_i), so it's like an approximation of an integral: the width t times
01:43:07
01:43:49
6187
6229
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6187s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
the value of the function here. Okay, now when we work this out, the log of the product is the sum of the logs; then we see that this looks like an approximation of an integral, so we say okay, it's almost the same as the integral, and what we get here is what is called the differential entropy, plus a term that ties into the discretization level. So it seems we can actually use the differential entropy: if we have a functional representation of our distribution and we can compute the integral for it, we can understand what the differential entropy is.
01:43:49
01:44:32
6229
6272
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6229s
https://i.ytimg.com/vi/p…axresdefault.jpg
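Written out, the derivation being described is roughly the following (my reconstruction, with t the bin width and h(x) the differential entropy):

```latex
\[
H\big[x^{(t)}\big] = -\sum_i P_i \log_2 P_i, \qquad P_i \approx t\, p(x_i),
\]
\[
H\big[x^{(t)}\big] \approx -\sum_i t\, p(x_i)\big(\log_2 p(x_i) + \log_2 t\big)
\approx -\int p(x)\log_2 p(x)\,dx + \log_2 \tfrac{1}{t}
= h(x) + \log_2 \tfrac{1}{t}.
\]
```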
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
And then the log of our discretization level will determine the overall entropy that goes into representing it as a discrete variable. Okay, so that's a bit of background on how to deal with the entropy of continuous variables; the extra term will be determined by our discretization. Now let's go to the actual challenge that we wanted to solve. We'll mostly think about discrete variables now, but it also works for continuous ones, as we just discussed. So, key assumption: some high-dimensional
01:44:32
01:45:25
6272
6325
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6272s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
distributions p(x) allow for easy coding. For example, when x is Gaussian you can treat it as independent random variables along each axis, and each individual variable you can encode efficiently, as we said on the previous slide; or maybe for x we can use an autoregressive model, and we know how to do autoregressive encoding with arithmetic coding schemes and so forth. These are examples of high-dimensional situations where we can encode things efficiently; there might be more, but for now let's
01:45:25
01:46:03
6325
6363
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6325s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
mostly think about this one over here. Mixture models allow us to capture a much wider range of distributions than their components. For example, a mixture of Gaussians is much richer than a single Gaussian: a single Gaussian, all it can do is look like this, but a mixture model could have many of these bumps mixed together, and then the overall thing would look something like that, which is a much more complex distribution that you can capture with this five-component mixture than with a single Gaussian. The key question we want to answer is: if p(x)
01:46:03
01:46:43
6363
6403
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6363s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
is a mixture model of easily encodable distributions, does that mean we can also efficiently encode p(x)? We'll look at 1-D illustrations to get the point across, because that's easy to draw on slides, but keep in mind that we're covering a method that generalizes to higher dimensions. If all you ever want to do is encode 1-D variables, you can use many, many methods; it's not about 1-D, that's just a way to draw things on the slide. Also, we will not allow ourselves to rely on the domain of x being small, because if the
01:46:43
01:47:23
6403
6443
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6403s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
domain of x were small we could rely on that and do many other things. So we imagine a higher-dimensional x that can take on many, many values, but somehow we can efficiently encode against a single component of the mixture that we're using to represent p(x). Okay, let's see what we can do. Our running example is going to be a mixture model: p(x) is a weighted sum; this weight is choosing the mode, there are different modes indexed by i, and there is a distribution over x given i. So the way to think about it is that when we sample x, we first sample a mode, and then we sample x from that mode.
01:47:23
01:48:05
6443
6485
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6443s
https://i.ytimg.com/vi/p…axresdefault.jpg
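In symbols (my notation), the running example is:

```latex
\[
p(x) = \sum_i p(i)\, p(x \mid i), \qquad \text{sampling: } i \sim p(i), \;\; x \sim p(x \mid i).
\]
```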
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
If we sampled mode one, this is the distribution we sample from; mode two, this is the distribution; mode three, maybe this is the distribution; mode four, maybe this one; and so forth. The assumption is that each of these modes is itself easy to encode against. Easy to encode means that we have a scheme that will give us close to this, because that's what a good encoding would do: it would cost you a number of bits equal to log 1/p(x|i), that is, once we know it's mode i and our
01:48:05
01:48:44
6485
6524
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6485s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
distribution is p(x|i). Okay, the first scheme we might consider is max-mode encoding. So, max-mode encoding, what do we do? We say okay, we have a mixture distribution; in this method, to code x, well, we don't know how to code directly against p(x), but we know how to code against p(x|i). So what if we could get the i that was used to generate x? Then we could encode x efficiently against it. So we find the i that maximizes p(i|x). Imagine we're back to this mixture model picture and our x falls here; then we might say, hmm, this mode over here is the one, and this
01:48:44
01:49:34
6524
6574
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6524s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
is mode one, two, three, and we say okay, ours is three, that's the most likely mode to have generated this x. But of course, even if we know how to encode x given i, we still have to send i across, otherwise the other person cannot decode with that scheme, because they don't know what we're coding relative to. So we first send i, which will cost us log 1/p(i); then we have to send x, which will cost us log 1/p(x|i). And so the expected code length, shown on the right here, is: well, there's an expectation over possible x's we need to send; when we
01:49:34
01:50:12
6574
6612
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6574s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
send an x, we look at the i that minimizes this. What we're encoding here is both i and x, so we're really paying log 1/p(i, x), but we get to choose our i, and we're picking the one that minimizes that quantity. Another way to write it is the second equation here, same thing. Okay, so the scheme is straightforward and we know how much it's going to cost us. Is it optimal? It's not optimal, because effectively we're using a different distribution q(x), which will have a cost of H(x) plus the KL between p and q. What do I mean by that?
01:50:12
01:50:56
6612
6656
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6612s
https://i.ytimg.com/vi/p…axresdefault.jpg
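In symbols, the expected length just described is (my reconstruction, with the lecture's interpretation that coding every x with its single best mode amounts to coding against some q that is not p):

```latex
\[
L_{\text{max-mode}}
= \mathbb{E}_{x\sim p}\Big[\min_i\Big(\log_2\tfrac{1}{p(i)} + \log_2\tfrac{1}{p(x\mid i)}\Big)\Big]
= \mathbb{E}_{x\sim p}\Big[\min_i \log_2\tfrac{1}{p(i,x)}\Big]
\;\approx\; H(x) + D_{\mathrm{KL}}\!\big(p \,\|\, q\big).
\]
```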
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
When we use this encoding scheme, imagine we have two modes; this is p, and you see p running over here, with both of those two modes. When we use the scheme above, effectively what we're doing is fitting a distribution q to our original distribution and encoding based on q, because everything that falls on this side will use mode one and everything that falls on that side will use mode two, and this is not the same as p, it's different, and we will pay the price: we'll pay the KL between the two in extra bits. Now you might say, do we care? Do I
01:50:56
01:51:41
6656
6701
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6656s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
care about paying this KL divergence? Well, it depends. In this drawing here, yeah, you probably care, it's a pretty big KL. If your distribution were such that your modes are completely separated from each other, then the KL between p and q would be almost zero and you might not care. Let's think about what we often care about in our scenarios, which is that we might have a variational autoencoder with a latent code, a latent variable z; so instead of the i we would have z, and that z can take on a continuum of values, so there'll be a continuum of modes, and if
01:51:41
01:52:15
6701
6735
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6701s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
we pick only one of them instead of somehow using the continuum, we're losing a lot, because, since it's a continuum, they're all going to be very close together. And so we are going to lose a lot by using q instead of p in this situation. So we have a scheme, we can do coding, but we're paying a price. The question is, can we somehow get it done without paying that KL? Well, let's think about it some more. So we looked at max mode; what if we do posterior sampling? In posterior sampling we say, well, we still have the same situation as before,
01:52:15
01:52:53
6735
6773
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6735s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
but instead of taking the i that maximizes p(i|x), as we did before, here we sample i. That might not sound smart at first, and in fact, when we're done with this slide you'll see that the coding scheme we're covering on this slide is worse than the one we covered on the previous slide. But in the process of covering this scheme we'll build up some new concepts that allow us, on the next slide, to get the best scheme, better than the previous one and this one. So bear with me for a moment here. So we sample i from p(i|x),
01:52:53
01:53:34
6773
6814
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6773s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
we send i, same cost as before, using an encoding based on the prior p(i). Why not p(i|x), you might say, isn't that more peaked, can't we send it based on p(i|x)? Well, the recipient doesn't have x, so they cannot decode against p(i|x); they have nothing else, we send i as the first thing, so they have to decode it based on the prior, and so you have to encode it based on the prior. Then we send x using the same encoding scheme as before. This is reasonably efficient, but not necessarily as efficient as using the best i. Remember, imagine we have
01:53:34
01:54:13
6814
6853
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6814s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
these distributions here, and let's say our x landed over here; let's say there's mode 1 and mode 2, and we're unlucky, and when we sample i from p(i|x) we somehow end up with i equal to 2. Well, encoding x from p(x|i=2) is going to be very expensive, because there's a low probability here, so that code is not going to be very efficient at getting x across. So it makes it less efficient than what's on the previous slide; in fact the difference is that here we have log 1/p(i, x), whereas in the previous
01:54:13
01:54:53
6853
6893
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6853s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
one we had a min over i sitting in front of it. Okay, so we lost something here, but it's all for a good reason, because now what we're going to be able to do is earn bits back, which is the key concept we want to get to. So is it optimal? Yes and no. Yes, it's optimal if we'd like to send i and x; but we don't care about sending i, we just want to send x. i is something we made up; x is the real thing, the symbol; i is just a mode of the distribution we're fitting. So it's optimal for sending both, but it's a waste to send i. And so, how
01:54:53
01:55:41
6893
6941
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6893s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
much do we waste? Well, roughly the entropy of i given x, effectively, because that's what we send that's wasted. So what can we do, what can we do to avoid this overhead? The very interesting idea, the bits-back idea, is that somehow we send too many bits, but we can earn them back. And so at a high level that's what's going to happen: we acknowledge we sent too much, and we're going to somehow earn those bits back and not have to pay for them. So let's take a look at that.
01:55:41
01:56:21
6941
6981
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6941s
https://i.ytimg.com/vi/p…axresdefault.jpg
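Written out (my reconstruction), the expected cost of posterior sampling is:

```latex
\[
L_{\text{posterior-sample}}
= \mathbb{E}_{x\sim p,\; i\sim p(i\mid x)}\Big[\log_2\tfrac{1}{p(i)} + \log_2\tfrac{1}{p(x\mid i)}\Big]
= \mathbb{E}\Big[\log_2\tfrac{1}{p(i,x)}\Big]
= H(i,x) = H(x) + H(i\mid x),
\]
```

so the waste relative to the ideal H(x) is exactly H(i | x), the entropy of the mode given the symbol.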
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
Bits-back coding follows the scheme on the previous slide: we sample i from p(i|x); the cost to send it is log 1/p(i); then we send x, cost is log 1/p(x|i); all the same as on the previous slide. Now the bits-back idea, with exact inference for now, the difference with approximate inference comes later: the recipient decodes i and x, but also knows the distribution of i given x, because they have the corresponding model on their side. So what that means is that the recipient can actually recover the random seed that you used to
01:56:21
01:57:08
6981
7028
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6981s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
sample i from p(i|x). They can go through the reverse process: you did the sampling here, you used a random seed, and what is a random seed? It's really the sequence of random bits that was used. Since the recipient knows the distribution, knows i, knows x, they can back out the sequence of random bits that caused you to sample i. So they can reconstruct the random bits used to sample from p(i|x). Those random bits were effectively also sent; those are log 1/p(i|x) random bits, which we now don't have to count. What do I mean by that? Imagine
01:57:08
01:57:56
7028
7076
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7028s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
you're trying to send x, and somehow you have a friend who is also trying to send random bits. You can take your friend's random bits, use them for this sampling, send them across through this process, and they'll be able to be decoded on the other side, and those are your friend's bits, so you don't have to pay the price for them; those are their bits, they happen to come out on the other side, that's their cost to pay. So one way to think of it: all you have to pay for yourself is the x given i part, and that's it. And we'll make that more concrete, even if
01:57:56
01:58:29
7076
7109
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7076s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
they're your own bits. So the bits-back coding cost: you pay a cost of log 1/p(i) to send i, then a cost of log 1/p(x|i) to send x given i, and then you earn this term back, because those bits are actually a bunch of random bits that were sitting there and get sent across, but they're not yours, so you don't have to pay the price for them. And if you do the math, you actually get log 1/p(x), so you get to encode the x you want to send at the entropy rate for x. So we've got optimal encoding, great, we're optimal. Now what does it look like?
01:58:29
01:59:10
7109
7150
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7109s
https://i.ytimg.com/vi/p…axresdefault.jpg
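The accounting just described, written out (my reconstruction):

```latex
\[
\underbrace{\log_2\tfrac{1}{p(i)}}_{\text{send } i}
+\underbrace{\log_2\tfrac{1}{p(x\mid i)}}_{\text{send } x \text{ given } i}
-\underbrace{\log_2\tfrac{1}{p(i\mid x)}}_{\text{bits back}}
=\log_2\frac{p(i\mid x)}{p(i)\,p(x\mid i)}
=\log_2\frac{1}{p(x)},
\]
```

independent of which i happened to be sampled.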
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
You have some symbol data, this is what you want to send, and there's some auxiliary data, a random bit sequence. The sender does lossless compression through the scheme we defined; the receiver gets back out the symbol data and also gets back out the auxiliary data. Because you get it back out on this side, you don't count it against your budget for encoding. Assumptions we make: we can compute p(i|x), which can be a strong assumption, being able to find that posterior distribution in your mixture model; it's the distribution that you don't
01:59:10
01:59:51
7150
7191
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7150s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
necessarily have readily available; and then the assumption that we have auxiliary random data we'd like to transmit: we send it across, but we don't have to pay a price for it, somebody else carries that cost. So what happens if we do this with approximate inference? In a VAE we don't find the exact posterior for z given x; we have an inference network, or here q(i|x), an inference network, and we sample from q(i|x). Otherwise everything is the same, we go through the whole process, and what happens is that what we get back is log 1/q(i|x), and
01:59:51
02:00:29
7191
7229
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7191s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
what we see here is that the cost of transmitting the data is then a little higher than log 1/p(x), because effectively we have the wrong distribution here, we have q instead of p. This is the evidence lower bound that we optimize with the VAE, so if you use a VAE to do bits-back coding, by optimizing the loss of the VAE you're directly optimizing the compression capability of this bits-back coding approach: a perfect match between the VAE objective and compression. So how about that source of random bits that we'd also like to send — where does it come from?
02:00:29
02:01:11
7229
7271
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7229s
https://i.ytimg.com/vi/p…axresdefault.jpg
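With an approximate posterior q, the same accounting gives (my reconstruction, with the ELBO measured in bits):

```latex
\[
\mathbb{E}_{i\sim q(i\mid x)}\Big[\log_2\tfrac{1}{p(i)}+\log_2\tfrac{1}{p(x\mid i)}-\log_2\tfrac{1}{q(i\mid x)}\Big]
= -\,\mathrm{ELBO}(x)
= \log_2\tfrac{1}{p(x)} + D_{\mathrm{KL}}\!\big(q(i\mid x)\,\|\,p(i\mid x)\big)
\;\ge\; \log_2\tfrac{1}{p(x)}.
\]
```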
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
In practice it's actually your own bits. So imagine you already have some bits sitting here, some zeros and ones, maybe you've already done some compression of something else; it's a random-looking sequence, sitting there ready to be transmitted. Then the first thing you have to do — and the notation here is slightly different: y corresponds to our i and s corresponds to our x, so keep that in mind, that's the notation they use in the paper from which we took this figure — so, in
02:01:11
02:01:48
7271
7308
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7271s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
decoding the mode y, we do it with the inference distribution, y given the symbol s0. To do that we need to grab random bits to do that sampling; well, that means we consume these random bits from the string that we want to send across. The next thing that happens is we start encoding: we encode s0, remember that's our x, so our symbol given the mode gets encoded, and this grows the number of bits you want to send; then we encode the mode from its prior, and this grows again. And so what happened here is that in the process
02:01:48
02:02:34
7308
7354
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7308s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
of coding one symbol, we have first consumed some bits from the stack of things to be sent, then we've added more bits to encode s0 given y, the symbol given the mode, and added more bits to code the mode itself. Overall this thing will have grown, typically, not guaranteed, but typically it will have grown. And now we can repeat this process: what we had here as the extra information is now sitting here, we can get our next symbol s1, figure out what our y1 is, and repeat. And so we see that what actually happens is we're
02:02:34
02:03:14
7354
7394
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7354s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
building up, by pushing onto and popping from the stack, the sequence of bits that encodes a sequence of symbols with this mixture model under bits-back coding. So what we really see is that the bits we're getting back are not necessarily bits sitting off to the side; they're bits that came onto our stack from encoding the previous symbol that we encoded this way. And you might wonder, well, if we took them off here but put other things on, have we lost the ability to get those bits back? No, that's the whole idea: in the decoding, as we saw
02:03:14
02:03:53
7394
7433
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7394s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
on the previous slide, two slides back, sorry: when we decode, we can reconstruct the random bits that were used to sample the mode given the symbol, and so we get them back out at that time. So we still get everything on the other side; this is not lost, it will be decoded, and the bits come back. All right, so the last thing I want to cover, and then I'm going to hand it off to Jonathan, maybe after a very short break, is how exactly we get those bits back. I've been telling you you're
02:03:53
02:04:40
7433
7480
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7433s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
going to get these bits back: you're going to have sampled your mode from q(i|x), and then later you're going to get the bits back. How does this work? So let's say you have a distribution; I have an x, and I'm going to draw the distribution q(i|x); it's going to be discrete for what I'm doing here. And I'm going to look at the cumulative distribution. So let's say i lives here, and i could be maybe one, two, three, or four. For the cumulative distribution we'll say, okay, maybe one has a probability of, let's say, 0.2 or
02:04:40
02:05:29
7480
7529
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7480s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
something; then once I hit two, maybe two has a probability of 0.1, so we hit level 0.3 over here; and three might have a probability of maybe 0.5, so we go up to 0.8; and then four has a probability of 0.2, all the way to one. What does it mean to sample i given x? I have this bit stream, so I have a bit stream sitting there; I'm going to start from the end here and work my way in. So the first thing I see is a zero. I have a zero-to-one interval here; the zero tells me that I am in the 0 to 0.5
02:05:29
02:06:20
7529
7580
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7529s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
interval. But in that interval I can still be several things: it could be either one, two, or three; I don't know yet what I'm going to be. So I consume the next zero, at which point I'm in the 0 to 0.25 interval, and I still don't know what I'm going to be. I've consumed this zero, I've consumed this zero; now I'm going to consume this one. As I consume this one, it means I'm going to be in the top half of 0 to 0.25, maybe here. I still don't know what I'm going to be, I could be a 1 or a 2, I don't know, and I'm going to have to
02:06:20
02:07:17
7580
7637
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7580s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
consume this zero next, and now I'm in the bottom half of that, and now I actually know: once I've consumed those four bits I know i is 1. Now I can go to p(x | i=1) and encode my x against it, right, and I also have my prior p(i) that I'd use to encode i equal... hold on, let me clear this for a moment. So I need to send x and I need to send i. How am I going to send i? Well, you could say, I have a distribution here over four possible values, and I could encode i by maybe building a Huffman code or something over those four possible values. But
02:07:17
02:08:20
7637
7700
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7637s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
you can do something much simpler. To get across that i equals 1, well, I achieved that by this sequence, I achieved it by the 0 1 0 0 sequence, so I can actually just send 0 1 0 0 across, and that signals which i I have. That way I'm also trivially getting those bits back, because the person who receives this gets to read off the bits just like that: oh, here are the bits, I can just read them off, and then they can also use them to decode x. All right, so let's see, I think that's it for me.
02:08:20
02:09:07
7700
7747
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7700s
https://i.ytimg.com/vi/p…axresdefault.jpg
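Here is a toy Python sketch of the sampling step just walked through; this is my own illustration (sample_mode and the example bit stream are made up), but the cumulative values 0.2 / 0.3 / 0.8 / 1.0 match the four-mode example on the board, and it assumes enough bits are sitting on the stack.

```python
def sample_mode(bits, cdf):
    """Consume bits (stack top = end of list), each halving the current interval,
    until the interval lies inside one bin of the CDF of q(i | x).
    Returns the sampled symbol plus the consumed bits -- exactly what the
    receiver can later read back off to recover i."""
    low, high, used = 0.0, 1.0, []
    while True:
        prev = 0.0
        for sym, upper in cdf:
            if prev <= low and high <= upper:   # interval fits in one bin: done
                return sym, used
            prev = upper
        b = bits.pop()                          # take the next bit off the stack
        used.append(b)
        mid = (low + high) / 2
        low, high = (mid, high) if b == 1 else (low, mid)

cdf = [(1, 0.2), (2, 0.3), (3, 0.8), (4, 1.0)]  # the 0.2 / 0.1 / 0.5 / 0.2 example
stream = [1, 1, 0, 1, 0, 0]                     # bits already sitting on the stack
print(sample_mode(stream, cdf))                 # -> (1, [0, 0, 1, 0])
```

Popping the last four bits off the stack (0, 0, 1, 0, i.e. the written sequence 0 1 0 0 read from the end) narrows the interval into the first bin, so it returns i = 1 together with exactly the bits the receiver will later read back off.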
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
Let's take maybe a two-, three-minute break, as I know Jonathan has a lot to cover, and let's maybe restart around 7:12, 7:13 for the last part of the lecture. Jonathan, do you want to try to take control of the screen here? Um, yeah, sure. Okay, um, let's see, can you hear me okay? Yeah. I might turn off my camera too so that my internet connection is more reliable, but we'll see; just let me know if it's not working well. Okay, um, I guess I can just jump in and talk more about bits back. Is it possible to address a question on chat first? Oh yeah, questions on
02:09:07
02:11:16
7747
7876
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7747s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
the chat; I think they're about this part of the lecture, and then we'll dive in with Jonathan's part after that. So the first question is: we can have p(x) where p is a mixture of Gaussians; how do you get p(x) to begin with? Yeah, it's a very good observation; that's not exactly our assumption. The assumption, more precisely, is that we have a mixture model, and that for the individual components in the mixture model we know how to encode efficiently, but for the mixture model as a whole we might not know how to encode, and now we have a scheme to do that, especially if
02:11:16
02:11:55
7876
7915
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7876s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
you know how to encode each component: bits-back gives you a way to encode against the mixture model, which likely better fits your data distribution, and as we know, the closer you are to the true distribution, the smaller the KL divergence, the more efficient your coding will be. So it allows us to use the mixture model, which might be a better fit, which in turn results in higher-efficiency encoding. Another question is about whether there's any dependency between the bits we reuse and what we encode; that's a really, really good question. So one of the big things that I think Jonathan
02:11:55
02:12:30
7915
7950
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7915s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
will cover, you know, Jonathan's covering that paper, the 2019 bits-back ANS paper by Townsend et al., which investigated exactly that assumption. So we'll hear more about that, but the notion is that if you already put bits on your bit stream from encoding the previous symbol, and you work with those bits, is that really as efficient? The real question is: are those bits really random enough to achieve the efficiency that we claim here? And so Jonathan will get to that question maybe five or six slides from
02:12:30
02:13:04
7950
7984
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7950s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
now. So hold that thought for now; it should be clear in a few slides. All right, um, right, okay, so I'll just talk a bit more about bits back and some more modern instantiations of bits-back coding into real algorithms that we can actually download and use, and also, in particular, how bits-back coding plays with new types of generative models like VAEs and hierarchical VAEs and flows, instead of, say, just Gaussian mixture models. Right, so the core algorithm that all these newer bits-back papers are based on is this thing
02:13:04
02:14:04
7984
8044
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=7984s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
called asymmetric numeral systems. So this is an alternative to arithmetic coding, as Pieter was saying, and it's especially appealing because, well, first of all it's very simple, and you can implement it in a very efficient way, which makes it actually practically usable, and it also has some nice stack-like properties that make it compatible with bits-back coding. So I'll first take some time to describe what ANS actually is. So again, ANS, just like arithmetic coding, is a way of
02:14:04
02:14:44
8044
8084
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8044s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
taking a sequence of data and turning it into a bit stream, where the bit stream's length is something like the entropy of the data times the number of symbols. So I'll just jump right in and describe how this thing works. So let's say the source, the sort of thing that we're trying to encode, is just two symbols, a and b, each occurring with probability 1/2. And so you might imagine that the naive way to code stuff like this is to just assign a to the number 0 and b to the number 1, and then your string
02:14:44
02:15:29
8084
8129
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8084s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
of a's and b's just turns into a string of zeros and ones, and that pretty much is the best that you can do. But let's see how ANS does this. Um, so ANS describes a bit stream, but it doesn't represent it exactly as a sequence of bits; it represents it as a natural number. So ANS stores this thing called a state s, and we start at 0. And ANS defines an encoding operation, so there's this encoding operation that takes in the current state and takes in the current symbol that you wish to encode. So let's say you start at some
02:15:29
02:16:16
8129
8176
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8129s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
state s and you want to encode the symbol a; in this very particular case, what ANS will do is produce the number 2s, 2 times s (remember the state s is a natural number), and if you wish to encode b, it produces the state 2s + 1. So this is ANS for this very simple source; of course ANS generalizes further, but in this case this is all it does. And so you can see that really what this is doing is appending zeros and ones on the right of the binary representation of the
02:16:16
02:17:00
8176
8220
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8176s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
state s, and that's how this algorithm stores data, that's how it stores a and b. And a very important property of any reasonable coding algorithm like ANS is that you should be able to decode the data that you encoded. So given some state s, you want to be able to tell what was the last symbol that was encoded, and that's very easy to check: if s is even, then you know the last symbol was a; if it's odd, then you know it's b; and once you know that, you can just divide by two and take the floor, and then you get the previous state. So that's how
02:17:00
02:17:48
8220
8268
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8220s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
this algorithm works, and you can already see, just based on this very simple example, that this algorithm has the stack-like property: if you encode a sequence of symbols, a, b, b, then the next thing that you decode, if you wish, will be the last thing that you encoded. So it's sort of a first-in, last-out type of stack. Okay, can I ask a question here? Yeah. So, sorry, for this simple example, what is the capital P(x), the mixture of Gaussians? Can you explain it in terms of this example? And also I don't see why the stack is being used here, thank you.
02:17:48
02:18:29
8268
8309
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8268s
https://i.ytimg.com/vi/p…axresdefault.jpg
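A minimal Python sketch of the uniform-coin ANS just described (my own code; encode/decode are made-up names): encoding appends a bit to the binary representation of the state, decoding pops it back off, which is exactly the stack behaviour being asked about.

```python
def encode(s, sym):            # sym is 'a' or 'b', each with probability 1/2
    return 2 * s if sym == "a" else 2 * s + 1

def decode(s):                 # returns (previous state, last symbol encoded)
    return s // 2, ("a" if s % 2 == 0 else "b")

s = 0
for sym in "bab":              # encode b, a, b -> states s1, s2, s3
    s = encode(s, sym)
print(s)                       # final state (5, binary 101)
print(decode(s))               # -> (2, 'b'): last in, first out
```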
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
Yes, so in this case we haven't gotten to the mixture yet; we're going to talk about that soon. This is just for this very simple source over here, it's just a coin flip; we just want to store coin flips, there are no latent variables or anything like that. The second question was where the stack comes in. It comes in from the fact that, let's say we encode a sequence of symbols, say b, a, b; if we follow this encoding rule, then that's going to produce a sequence of states, it's going to be like s1, s2, s3, and
02:18:29
02:19:14
8309
8354
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8309s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
so s3 is the final state that we have after encoding these three symbols. Then what ANS lets us do is decode from that state, and when we decode from that state, ANS will tell us the last symbol that was encoded and then tell us the previous state that came before it. So that's why it's like a stack: if you ask ANS what the last symbol encoded was, it's gonna be this last B, not the first one. Hopefully this will become clearer as I go through some more examples
02:19:14
02:19:51
8354
8391
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8354s
https://i.ytimg.com/vi/p…axresdefault.jpg
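As a small usage sketch of the stack behavior just described; I use a non-palindromic sequence so the reversal is visible, and simply start the state at 0 and pop a fixed number of symbols, which glosses over how a real coder handles the initial state.

```python
# Self-contained demo of the stack behavior for the uniform {'a', 'b'} source,
# repeating the encode/decode rules from the sketch above.

def encode(state, symbol):
    return 2 * state + (1 if symbol == 'b' else 0)

def decode(state):
    return state // 2, ('a' if state % 2 == 0 else 'b')

state = 0
for sym in ['a', 'b', 'b']:        # states: 0 -> 0 -> 1 -> 3
    state = encode(state, sym)

decoded = []
for _ in range(3):                 # the decoder must know how many symbols to pop
    state, sym = decode(state)
    decoded.append(sym)
print(decoded)                     # ['b', 'b', 'a'], the reverse of the encoding order
```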
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
okay, so let's see how this generalizes to the setting of not just the binary source, not just the coin flip, but something more interesting. Here we again have two symbols A and B, but now the probabilities aren't one-half anymore; instead it's gonna be one-fourth for A and three-fourths for B, so B is more likely. So we're now going to think about how to generalize ANS to this setting, and the way it's done is like this: you take all the natural numbers, so
02:19:51
02:20:43
8391
8443
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8391s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
here, here's all the natural numbers, and what we do is we partition them into two sets, one set for A and one set for B. I'll just write down what those sets are and then talk about why we chose them. So we're gonna write down one set for A, and this is going to be 0, 4, 8, and so on, and this is a partition, so the set for B is just all the other numbers, so that's 1, 2, 3, 5, 6, 7, and so on. So just to draw it out here: these numbers 0, 4, and 8 correspond to A, and all the other numbers correspond to B. I'm saying
02:20:43
02:21:35
8443
8495
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8443s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
correspond to A, meaning correspond to ending in A, or correspond to ending in B. I guess I haven't defined what 'corresponds to' means yet; I just mean that we're defining these two sets: S sub A is gonna be all the numbers divisible by 4, and S sub B is gonna be the others. So we've just defined these two sets, and now I'll describe how we encode some string. Let's say we want to encode the string B, A, B. So again ANS builds up some big natural number, which is the state, so we start at
02:21:35
02:22:20
8495
8540
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8495s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
the state s equals zero, and what we want to do is encode onto the state zero the symbol B. The way we do this is we look for the zeroth number in B's set. This might sound a little bit weird, so maybe I'll just write out the general rule: when we encode a state s with, say, the symbol A, we look at the s-th number in S sub A, so this is s and this is that number, the s-th element of S sub A. Okay, so let's just go through this. When we encode 0 with B, we look for the zeroth number in B's set, and B's set is this: 1, 2,
02:22:20
02:23:26
8540
8606
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8540s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
3, 5, 6, 7, all the numbers that are not divisible by four, and the zeroth number, starting the indexing at zero, is 1, that's the first number. So that's what we get here, and that's just writing it down in this table here. Okay, now the next character we want to encode is A, so we take the new state, 1, and we want to encode the symbol A, so we look for number one in A's set. A's set is 0, 4, 8, and so on, so number one is 4, so that was here, and finally the new state is 4. And then
02:23:26
02:24:08
8606
8648
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8606s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
we're ready to encode B again, and so that's 6. So what this says is that ANS has turned the string B, A, B into this number 6, and this number 6 stores these three characters, which is kind of cool. Okay, so first of all, this might seem like a weird set of rules to play by, but first let's check that this is actually decodable, otherwise this would be useless. So to see that: is it possible to take the number 6 and see which was the last character that was encoded? The answer is yes, because
02:24:08
02:24:50
8648
8690
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8648s
https://i.ytimg.com/vi/p…axresdefault.jpg
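Here is a small Python sketch of this encoding rule that reproduces the B, A, B to 6 example; the closed-form expression for the s-th number not divisible by 4 is my own shortcut for indexing into S_B, not something written on the slide.

```python
# Sketch of the generalized ANS encoding rule for p(A) = 1/4, p(B) = 3/4.
# S_A = {0, 4, 8, ...} (multiples of 4); S_B = all other natural numbers.
# encode(s, x) returns the s-th element (0-indexed) of the set for symbol x.

def encode(state: int, symbol: str) -> int:
    if symbol == 'A':
        return 4 * state                 # the s-th multiple of 4
    return state + state // 3 + 1        # the s-th number not divisible by 4

state = 0
for sym in 'BAB':                        # states: 0 -> 1 -> 4 -> 6
    state = encode(state, sym)
print(state)                             # 6
```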
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
these two sets S sub A and S sub B were defined to partition the natural numbers, so for any natural number like 6 you know which set it belongs to. You know that 6 belongs to S sub B, and so you know the last character that was encoded was B. Then you can also recover the previous state, the state before B was encoded, and the way you do that is just by looking at the position of 6 in S sub B: you see that 6 is the fourth number in S sub B, counting from zero, so 4 is the previous state, and you can just
02:24:50
02:25:32
8690
8732
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8690s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
keep repeating this, and you can recover all the characters that were encoded. So hopefully that convinces you that this is decodable. And the point of this is that we actually chose these sets S sub A and S sub B so that their density in the natural numbers is pretty much exactly the probability of the symbols. If you take a lot of natural numbers, the fraction of those numbers which lie in S sub A is about one-fourth and the fraction that lie in S sub B is about three-fourths, and so this encoding
02:25:32
02:26:07
8732
8767
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8732s
https://i.ytimg.com/vi/p…axresdefault.jpg
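And a matching decode sketch: membership (divisibility by 4) identifies the last symbol, and the rank of the state within that set recovers the previous state. Again, the helper name and the closed-form rank expression are mine.

```python
# Decoding works because S_A and S_B partition the natural numbers.

def decode(state: int) -> tuple[int, str]:
    if state % 4 == 0:                   # the state lies in S_A
        return state // 4, 'A'           # its rank among the multiples of 4
    return state - state // 4 - 1, 'B'   # its rank among numbers not divisible by 4

# Undo the earlier encoding of B, A, B (final state 6):
state, decoded = 6, []
for _ in range(3):
    state, sym = decode(state)
    decoded.append(sym)
print(decoded[::-1])                     # ['B', 'A', 'B']
```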
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
operation here, where we look for the s-th number in one of these sets, will advance us by a factor of about one over p. That's just what happens because this set is spread out with density p over the natural numbers, so when you index into it you increase by a factor of about one over p. So that means that every time you encode a symbol onto a state, I guess it's called x here, you end up multiplying your natural number by about 1/p; that's approximately what happens. So
02:26:07
02:26:51
8767
8811
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8767s
https://i.ytimg.com/vi/p…axresdefault.jpg
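A quick sanity check of the density claim, just counting among the first N natural numbers (N is an arbitrary choice for this sketch):

```python
# Fraction of the first N naturals that fall in S_A (multiples of 4) vs. S_B.
N = 10_000
frac_a = sum(1 for n in range(N) if n % 4 == 0) / N
print(frac_a, 1 - frac_a)   # roughly 0.25 and 0.75, matching p(A) and p(B)
```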
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
here, if S sub A is the powers of three, will it also work? Powers of three? Yeah, so for S sub A we just want the numbers to occur about one-fourth of the time, like zero, three, nine, etc. Will that also work? Um, that set doesn't really occur one-fourth of the time: if you pick some long stretch of natural numbers, those numbers don't occur one-fourth of the time within that stretch. Oh, I see, so we want the density of these things to be... Right, any partition that meets the criterion that the density is one-fourth is going to work, but this is
02:26:51
02:27:41
8811
8861
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8811s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
not a unique partition, right, so there are actually a lot of choices for this. This particular choice is made so that it's very easy to implement the encoding and decoding operations; you can just do it with some modular arithmetic. If you had some crazy choice, maybe it would work, but it might be very hard to compute the encode and decode operations. Well, it seems like the set of natural numbers is also a choice, like it could be chosen otherwise here; we only restrict to the natural numbers because the indexing starts at zero, so
02:27:41
02:28:21
8861
8901
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8861s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
it's convenient, is that why? Well, at the end of the day this is something that we want to turn into a binary string, and I guess I haven't described that part yet, but once you have encoded everything you have this big natural number that describes all your symbols, and then you turn it into a binary string, its binary representation, and you can ship that off to the receiver. So at the end you just have one number, right? Right, and then from this one number you can work backward and
02:28:21
02:28:53
8901
8933
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8901s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
recover all the symbols, right? Right. But here we have this number 6, and now we want to send 6 to the receiver, and, you know, all our communication protocols work in bits, so we have to turn 6 into a binary string and then send that to the receiver. But the point is, here's the property of this scheme: we basically keep dividing by p(s) every time we encode a symbol s, so that means that if we encode a bunch of symbols, we end up with roughly some starting state divided by the
02:28:53
02:29:34
8933
8974
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8933s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
product of the probabilities of all the symbols that we encoded. So this is some natural number, and if we code that natural number, the number of bits needed is about the log of the number, the log base 2 of the number; that's how many bits we need to code it. So we see that the code length is the sum over t of log 1 over p of the t-th symbol, over all the symbols, and if we take this and divide by the number of symbols, you see that it goes to the entropy of
02:29:34
02:30:21
8974
9021
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=8974s
https://i.ytimg.com/vi/p…axresdefault.jpg
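A rough empirical check of this claim for the 1/4, 3/4 source, reusing the encoding rule sketched earlier; starting the state at 1 (so that the bit length is meaningful) and the particular n are assumptions of this sketch, not details from the lecture.

```python
import math
import random

# Bits needed for the final state, divided by the number of symbols, should
# approach the entropy of the source, about 0.811 bits per symbol.
random.seed(0)
p = {'A': 0.25, 'B': 0.75}
n = 20_000
symbols = random.choices(['A', 'B'], weights=[p['A'], p['B']], k=n)

state = 1
for sym in symbols:
    state = 4 * state if sym == 'A' else state + state // 3 + 1

bits_per_symbol = state.bit_length() / n
entropy = -sum(q * math.log2(q) for q in p.values())
print(bits_per_symbol, entropy)          # both close to 0.811
```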
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
this source, so this is an optimal thing to do. That's a roundabout way of answering the question of why we use natural numbers. But I think the stack here is just a conceptual framework, right? In the actual implementation we don't need a stack. Yeah, that's absolutely true. We say it's a stack just because it has this property that every time we decode something we get the last thing that was encoded, not the first thing that was encoded, so we just call it a stack, but yeah, you
02:30:21
02:30:55
9021
9055
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9021s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
don't actually need a real stack. I mean, essentially it's just a partition of a lookup table: before we had a general lookup table, but now you're just partitioning the lookup table. Sure, right. I guess maybe the point here is that ANS is really these rules, and you can implement them efficiently (this is what Duda found), and it seems to work in practice, and it has this stack-like behavior; that's the point of this, and it's also optimal. Okay, so returning to more interesting models, which are not just
02:30:55
02:31:48
9055
9108
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9055s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
two characters A and B, but rather things like distributions over images represented by latent variable models. There's this very nice algorithm introduced in 2019 called bits-back with ANS, or BB-ANS, which is bits-back coding using ANS as a back end. The reason to use ANS is that the stack-like property of ANS, where whatever you decode is the last thing you encoded, turns out to make it very compatible with the concept of getting bits back. So let's just see how that works. So here
02:31:48
02:32:30
9108
9150
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9108s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
we're gonna think about latent variable models. Peter talked about Gaussian mixture models, which are one case of this. So here z is the latent variable, p(z) is the prior, and p(x) is the marginal distribution. This is how bits-back coding works, and we're gonna talk about how it works exactly with ANS. In BB-ANS, if you wish to send x, so the goal here is to send x, the first thing you do is start off with a non-empty bit stream, so we can just call it a bit stream
02:32:30
02:33:26
9150
9206
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9150s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
because that's just how we think about it. So the first thing the encoder does is decode z from the bit stream. The encoder knows x, so the encoder can compute q(z|x), which is just the approximate posterior of this latent variable model, and it can use this distribution to decode from the bit stream. We assume that this bit stream was full of random bits; this is a question that came up, and I'll talk about the consequences of that assumption later. So that's the first thing you do, and the point is that if you decode from
02:33:26
02:34:07
9206
9247
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9206s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
random bits, then you get a sample z. The next thing the encoder does is encode x using p(x|z), which is actually called the decoder, and then it finally encodes z with the prior p(z). Okay, so what actually happened here? If we just visualize this as a bit stream like this, this is what we started off with. In the first phase, when we decode z, we actually remove a little bit of this bit stream from the right, so imagine this is a stack where we keep adding things on the right. So in this first phase we remove a little bit and
02:34:07
02:34:52
9247
9292
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9247s
https://i.ytimg.com/vi/p…axresdefault.jpg
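Here is a structural sketch of these three steps in Python. The names ans_push, ans_pop, q_given_x, p_x_given_z, and p_z are hypothetical placeholders standing in for a real ANS coder and for suitably discretized distributions, so this shows the order of operations rather than a working implementation.

```python
# BB-ANS sender: decode z, then encode x and z. The receiver undoes the steps
# in reverse and returns the borrowed bits to the stream ("bits back").

def bbans_encode(state, x, q_given_x, p_x_given_z, p_z, ans_push, ans_pop):
    state, z = ans_pop(state, q_given_x(x))     # 1) decode z ~ q(z|x): uses ~log 1/q(z|x) bits
    state = ans_push(state, x, p_x_given_z(z))  # 2) encode x with p(x|z)
    state = ans_push(state, z, p_z)             # 3) encode z with the prior p(z)
    return state

def bbans_decode(state, q_given_x, p_x_given_z, p_z, ans_push, ans_pop):
    state, z = ans_pop(state, p_z)              # undo 3)
    state, x = ans_pop(state, p_x_given_z(z))   # undo 2)
    state = ans_push(state, z, q_given_x(x))    # undo 1): give the bits back
    return state, x
```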
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
then we get a slightly shorter bit stream. Then we encode x, so that increases the length of the bit stream, let's say by this much. Then we encode z, so that increases it again by a little bit. So now you can just look at this diagram and see how long this bit stream got, what the net change in the length of this bit stream was. Well, we have to add in these two parts, right, because the bit stream grew there, but then we also have to subtract the amount that we removed, which was
02:34:52
02:35:39
9292
9339
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9292s
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
a little bit at the beginning. So the net code length, the net change in the length of this bit stream, is negative log p(x|z) minus log p(z), that was for these two parts, steps two and three, but then we have to subtract the amount that we decoded from the bit stream at the beginning, so that's plus log q(z|x) for the first step. And the first step gives you z as a sample from q, so the actual code length on average is the average of this with z drawn from the approximate posterior, so
02:35:39
02:36:25
9339
9385
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9339s
https://i.ytimg.com/vi/p…axresdefault.jpg
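Writing out the net change in bit stream length from the three steps just described, with logs base 2 and z drawn from the approximate posterior:

```latex
\Delta L(x) \;=\; -\log_2 p(x \mid z) \;-\; \log_2 p(z) \;+\; \log_2 q(z \mid x),
\qquad z \sim q(z \mid x)
```

Taking the expectation over z gives the average code length, which is exactly the negative variational lower bound in bits:

```latex
\mathbb{E}_{q(z \mid x)}\bigl[\Delta L(x)\bigr]
  \;=\; -\,\mathbb{E}_{q(z \mid x)}\!\left[\log_2 \frac{p(x \mid z)\,p(z)}{q(z \mid x)}\right]
  \;\ge\; -\log_2 p(x)
```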
pPyOlGvWoXA
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
you can see that this is the VAE bound, this is just the variational bound on the negative log likelihood. So I guess this is... Can I ask, sorry, if you have a stream of, let's say, just lowercase letters a through z, then would p(z) here just be 1 over 26, and then p(x|z) would be the number of times it occurs divided by the total length? Right, so it just depends on what your latent variable model happens to be. The case that I'm actually thinking about is that this is a VAE, and so p(z) is like a standard normal
02:36:25
02:37:17
9385
9437
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9385s
https://i.ytimg.com/vi/p…axresdefault.jpg