Transcript: L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
Source: https://www.youtube.com/watch?v=PXOhi6m09bA (excerpt beginning around 1:20:20)
...be the same as the original marginal distribution of A, and oftentimes in the literature this conditional distribution is just a deterministic mapping, so I would say: give me any sample from A and I map it to its corresponding sample in domain B. So far so good. So we have stated the problem as: in the end we are trying to match this approximate marginal of B to the ground-truth marginal of B, which we have access to. Now, the point being raised is that the neural net Q doesn't really need to look at A at all, and that is a true statement. If my Q is so powerful that it could just represent the whole marginal distribution of B on its own, say Q(b | a) equals Q(b) for every (a, b) pair, then you could approximate the marginal distribution without doing any meaningful work. That's why in practice people use a fairly restrictive mapping, and why in most of the works we will look at it usually takes the form of a deterministic mapping: when Q(b | a) is deterministic, you don't get to represent the whole marginal distribution unless p(b) itself is a single point mass. But it is a correct observation, and as I suggested, this is a very weak learning signal; if you don't construct your model in the right way, you could extract nothing from it.
So let's see some examples of how this works at all in an ideal case. Say I have two distributions A and B, and they are very simple categorical distributions with only three possible values, a1, a2, a3. I'll draw some frequencies here, and then do the same thing for B, so these are probability mass functions. Now suppose we have a deterministic mapping; that means each a has to map to some b and each b has to map to some a. Then, based on marginal matching alone, this seemingly gives us a way to recover the ground-truth correspondence. Say the ground-truth correspondence is between a2 and b1, a3 and b2, and a1 and b3. I would argue this is the only correspondence that satisfies the marginal matching constraint. Well, it could be something that is not a bijection, but then it would not fulfill the marginal matching property, unless some of the values have no probability mass at all, in which case you can do whatever you want there. So I argue this is the only way to make the marginals match, and the reason is that each value has a distinct frequency: if you match them the wrong way, the marginal distribution induced by the mapping no longer matches the original one.
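As a minimal sketch of that argument (my own toy example, not from the slides): when all the probabilities are distinct, the unique marginal-matching bijection simply pairs values of equal probability, which you can recover by sorting.

```python
# Toy illustration: recover the correspondence between two categorical
# distributions with distinct probabilities by matching equal frequencies.
p_a = {"a1": 0.2, "a2": 0.5, "a3": 0.3}   # hypothetical marginals for domain A
p_b = {"b1": 0.5, "b2": 0.3, "b3": 0.2}   # hypothetical marginals for domain B

# Sort both domains by probability; pairing them up is the only bijection
# whose induced marginal on B matches p_b exactly.
sorted_a = sorted(p_a, key=p_a.get)
sorted_b = sorted(p_b, key=p_b.get)
mapping = dict(zip(sorted_a, sorted_b))
print(mapping)   # {'a1': 'b3', 'a3': 'b2', 'a2': 'b1'}, the ground-truth pairing
```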
But there is still a lot of ambiguity. If we imagine the most difficult kind of distribution, say uniform distributions over two values, then the situation is kind of hopeless, because every mapping works. Let's first look at the A-to-B direction: a1 can map to b1 and a2 to b2, but it's also possible that a1 maps to b2 and a2 to b1, and that is still fine from a marginal matching perspective. So the problem is that this mapping is ambiguous, and the same holds in the other direction. There are two sets of mappings we need to learn here: one is G_AB, the mapping from A to B, and the other is G_BA, from B to A, and you get the product of the possibilities in the two directions (b1 to a1, b2 to a2, and so on). So how many total possible solutions are there to this problem if we just use marginal matching? Right: in each direction there are two possibilities, and if you multiply them together there are four possible solutions, and they are basically totally ambiguous.
So one thing we are seeing here is that your objective function can induce a really large solution set; in this case it is almost the entire solution set. People realized this can be a problem, so they introduced another technique to at least restrict the solution set a little bit. It is most often referred to as cycle consistency, but it has taken a lot of other names in the literature, such as dual learning and back-translation. The core idea is that my approximate mapping should be similar to the ground-truth mapping, and if those mappings are deterministic, that means if I step through my mapping and then back, I should get back my original sample. If we think about the case of p(b | a): given a, it maps a to its correspondent in B, and if you then apply the mapping in the other direction, from B to A, you should get back a. This should hold in both directions, so it gives you another invariance: if the relationship between the two distributions is indeed deterministic, then these identities should hold for all possible pairs. Now, stepping back to the example we just looked at: if we impose cycle consistency, what is the number of possible solutions now, and why is it no longer four? We can see that the original solution set has four elements, but after you impose the cycle consistency constraint the solution set shrinks, because some combinations of mappings are no longer valid. Say I pick this G_AB and that G_BA: the combination is no longer valid, because a1 would get translated into b1, and b1, according to the other mapping, would get translated into a2, which no longer satisfies the cycle consistency constraint. So I can use this constraint to reduce the possible solution set in my search. Still, the problem is fundamentally under-determined: we are left with two possible mappings and we are not sure which one is correct, but the constraint at least exponentially shrinks the space of possibilities.
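A small enumeration makes the counting concrete; this is a toy sketch of the two-value uniform example above, not code from the lecture:

```python
from itertools import permutations

A, B = ["a1", "a2"], ["b1", "b2"]
p_a = {"a1": 0.5, "a2": 0.5}               # uniform marginal on A
p_b = {"b1": 0.5, "b2": 0.5}               # uniform marginal on B

def pushforward(p, mapping):
    """Marginal induced on the target domain by a deterministic mapping."""
    out = {}
    for x, px in p.items():
        out[mapping[x]] = out.get(mapping[x], 0.0) + px
    return out

# All candidate deterministic bijections in each direction.
maps_ab = [dict(zip(A, perm)) for perm in permutations(B)]
maps_ba = [dict(zip(B, perm)) for perm in permutations(A)]

marginal_ok = [(g, h) for g in maps_ab for h in maps_ba
               if pushforward(p_a, g) == p_b and pushforward(p_b, h) == p_a]
cycle_ok = [(g, h) for g, h in marginal_ok
            if all(h[g[a]] == a for a in A) and all(g[h[b]] == b for b in B)]

print(len(marginal_ok), len(cycle_ok))     # 4 marginal-matching solutions, 2 after cycle consistency
```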
So far we have seen the two core invariances that people use, and these are invariances that hold for all alignment problems, so we can use them as learning signals. Again, obviously, as we just saw, even in an extremely low-dimensional, basically categorical example this is not guaranteed to work; there are definitely problems it cannot solve. But in practice people can find problems that this kind of search is amenable to, and you can often control the biases in your system by selecting the right architectures, loss functions, and so on, and actually get to a certain level of success with this. [Answering an audience question] Yes, this expression is just a generalized version of the cycle consistency idea. It says that for an arbitrary data point a, if I draw samples b from my approximate conditional distribution and then translate that b back using my approximate reverse conditional, the distribution this induces should be similar to what you would get by doing the same thing with the real conditionals. When both P and Q are deterministic, this reduces to the deterministic-mapping case, but it can exist in this more general form.
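One way to write the condition being described, in my own notation rather than the slide's:

$$\int q(b \mid a)\, q(a' \mid b)\, db \;\approx\; \int p(b \mid a)\, p(a' \mid b)\, db \qquad \text{for all } a,\, a',$$

and when both conditionals are deterministic maps G_AB and G_BA, this collapses to G_BA(G_AB(a)) = a for all a (and symmetrically for b).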
Probably the best-known example that uses these learning signals is CycleGAN. CycleGAN's loss essentially consists of two parts. One part is marginal matching, meaning that after I translate my data from one domain to the other, the marginals still match each other, and you can see that here: my generator no longer maps from a noise vector z to x; instead it translates from X to Y, and I want to do it in such a way that the output looks like my target images. So this is just a standard GAN training loss saying that my mapping from X to Y should look just like Y. That's fairly straightforward, except that instead of comparing frequencies you use a GAN to do the marginal matching. The second part is that you achieve cycle consistency with an L1 loss: if we unpack that objective, you take a sample from one of your domains, probably your source domain, map it around a full cycle, and it should look like itself in an L1 sense. This is what they call forward cycle consistency, because it goes from X to Y and back to X; they also have a backward one, where you take a sample from your label domain and map it through the cycle the other way, Y to X to Y, and it should again be similar to itself in an L1 sense. That's the loss function, and you essentially combine these two things and train. I think in practice they use a least-squares GAN objective instead of the original GAN objective, but that probably doesn't make too much of a difference.
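Putting the two terms together, here is a minimal sketch of the objective in PyTorch. The tiny placeholder networks stand in for the actual ResNet generators and PatchGAN discriminators, and the weighting and training details are illustrative, not the paper's exact settings:

```python
import torch
import torch.nn as nn

# Tiny placeholder networks; the real CycleGAN uses ResNet generators and
# PatchGAN discriminators.
G_XY = nn.Conv2d(3, 3, 3, padding=1)   # translator X -> Y
G_YX = nn.Conv2d(3, 3, 3, padding=1)   # translator Y -> X
D_X = nn.Conv2d(3, 1, 3, padding=1)    # patch scores for domain X
D_Y = nn.Conv2d(3, 1, 3, padding=1)    # patch scores for domain Y

mse, l1 = nn.MSELoss(), nn.L1Loss()    # LSGAN loss and cycle loss
lam = 10.0                             # cycle-consistency weight (illustrative)

def generator_loss(x, y):
    fake_y, fake_x = G_XY(x), G_YX(y)
    # Marginal matching: translated samples should fool the discriminators.
    adv = mse(D_Y(fake_y), torch.ones_like(D_Y(fake_y))) + \
          mse(D_X(fake_x), torch.ones_like(D_X(fake_x)))
    # Cycle consistency: X -> Y -> X and Y -> X -> Y should reproduce the input.
    cyc = l1(G_YX(fake_y), x) + l1(G_XY(fake_x), y)
    return adv + lam * cyc

def discriminator_loss(x, y):
    # Real samples are pushed toward 1, translated samples toward 0.
    fake_y, fake_x = G_XY(x).detach(), G_YX(y).detach()
    return (mse(D_Y(y), torch.ones_like(D_Y(y))) +
            mse(D_Y(fake_y), torch.zeros_like(D_Y(fake_y))) +
            mse(D_X(x), torch.ones_like(D_X(x))) +
            mse(D_X(fake_x), torch.zeros_like(D_X(fake_x))))

x = torch.randn(4, 3, 64, 64)   # unpaired batch from domain X (e.g. photos)
y = torch.randn(4, 3, 64, 64)   # unpaired batch from domain Y (e.g. masks)
print(generator_loss(x, y).item(), discriminator_loss(x, y).item())
```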
They reported a couple of numerical results. The first one they look at is going from photos to semantic masks, where you can actually calculate an accuracy, so we get a quantifiable notion of how well the method is doing. The method we just introduced, CycleGAN, is basically unsupervised: you give it a bunch of street-scene images and a bunch of semantic masks, and you hope they somehow get aligned with each other, and what this shows is that it actually does pretty well. pix2pix is a fully supervised model, where you actually get pairs of images and their corresponding labels, so the last row should be read as basically an upper bound on the performance. CycleGAN, using no labels at all, does pretty well: roughly 60% of the pixels get labeled with the right class, even with no information about how the two domains are related to each other. [Answering an audience question] No, there are no pairs; you train the whole system with just a bunch of unpaired images and a bunch of unpaired masks, and it learns to align them. Good question; I don't think they have that level of analysis, but I think it would be interesting to see which kinds of things are easier to align and which are not. [Another question] Yes, so the question is that there is a lot of inductive bias here: you go from one image to another using a convnet, and you use a discriminator that operates on a patch basis.
So in effect you are doing a kind of patch-wise domain alignment; you can think of it as there being a lot of training signal that is not captured by the loss function at all. Unfortunately we don't know the full answer to that. What I know for sure is that if you just scramble the image, I mean literally permute the dimensions of your image tensor, then I'm pretty sure it would fail. Does that mean this is not useful? Probably not. But it does mean we don't fully understand which inductive biases are helping us; that's a good question. [Another comment] Right, the comment is that a lot of these translation problems operate in a very local manner: you're essentially saying I just need to change local pixels, like when you go from zebras to horses you're mostly changing a local texture, as opposed to something global, which is presumably much harder. I think that's likely the case, but I don't know for sure; you can ask Alyosha, who will be here soon. I do think this is still a long way from supervised learning; I would imagine a supervised model gets at least 95-percent-plus, though I don't know the exact number. But that is not really the right comparison anyway; the right comparison is CycleGAN against pix2pix, because they use similar architectures except that one is supervised and the other is unsupervised. All right, so they also have some ablations of the loss function, though I don't know whether they ablate the architecture.
The ablations basically tell you what we would expect. For one, the GAN loss alone, which means you just do marginal matching, is actually not bad already, and you can see that adding cycle consistency helps. And then there is something really puzzling: one configuration just kills everything, and I honestly have no idea what is happening there. What's also interesting is that it doesn't always help: here we're looking at going from photos to labels, and they have another experiment going from labels to photos. That is a much higher-entropy mapping while they still use a deterministic mapping, so you can imagine something is going on there that we don't fully understand. What is interesting here, though, is the evaluation metric. Remember, in this direction we are evaluating label-to-photo: given a semantic mask, how well can you generate the scene? But how do you even evaluate that? They have a pretty clever way: they take another, pre-trained semantic segmentation network, a fully convolutional network, run it on the generated images, and use that to quantify the results. That's a pretty interesting trick for evaluating this kind of mapping.
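Roughly, that evaluation looks like the following sketch; `generator` and `seg_net` are hypothetical stand-ins for the label-to-photo model and the pre-trained segmentation network, and per-pixel accuracy is just one of the metrics one could compute this way:

```python
import torch

def fcn_style_score(generator, seg_net, masks):
    """Evaluate a label->photo generator by segmenting its outputs.

    `generator` maps semantic masks to photos and `seg_net` is a pre-trained
    segmentation network (both hypothetical here); we check how often the
    segmentation of the generated photo agrees with the input mask.
    """
    correct = total = 0
    with torch.no_grad():
        for mask in masks:                        # mask: (H, W) integer class ids
            photo = generator(mask.unsqueeze(0))  # (1, 3, H, W) generated image
            pred = seg_net(photo).argmax(dim=1)   # (1, H, W) predicted class ids
            correct += (pred.squeeze(0) == mask).sum().item()
            total += mask.numel()
    return correct / total                        # per-pixel accuracy
```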
It's kind of like the Inception score, except that in this restricted domain I think it's better than the Inception score. These are some of the other results: the first case translates between a schematic annotation of a facade and the facade photo, then going from edges to shoes and from shoes to edges. Most of them make sense. They applied this to a wide variety of problems where it's just impossible to get labeled pairs, like summer Yosemite and winter Yosemite, where you could arguably get pairs although never exactly the same scene, and translating apples to oranges. And just like we said, this is not supervised and the mapping is not fully unique, so they have their set of failure cases. For this one, what the authors explain in the paper is that when the model is trained on horse images from ImageNet, it has never seen a human riding a horse, and so it just treats anything with a similar texture as horse and translates that. This goes back to the earlier question someone raised about what the failure cases are; I think this is one good example of what it fails on, and a good illustration of what the model is actually doing: it is finding a yellowish pattern and changing that yellowish pattern to stripes. That's apparently what the model is doing.
So that's it for CycleGAN. Essentially, it's pretty surprising that it can work on certain domains at all, and when it works I think the results are very reasonable. The next thing we'll look at is improving CycleGAN along a particular dimension. The crucial dimension here is that, remember, CycleGAN uses a deterministic mapping: you give it an image x, it gets translated into domain Y, and that translation is deterministic. That is fundamentally not correct, at least for a lot of the alignment problems we care about. Say I want to go from semantic mask to image: there are a lot of different ways to satisfy the same semantic mask, a lot of different ways to generate that image. The semantic mask only tells you there is a car here; what the car looks like, what's inside the car, what color it is, it specifies none of that. So there is a high-entropy mapping from semantic mask to image, and it is clearly not deterministic. So you might say: CycleGAN is essentially a mapping that takes in an image a and tries to map it to an image b, and one straightforward way to extend it would be to make this mapping take an additional noise source, just like in a typical GAN.
You take in an image a, and you also take in a noise source that presumably describes what the car looks like, what color it is, everything other than the contour. Then from that noise source you map to some image b, and if you sample different z's, hopefully you get different cars. Does that motivation make sense? Good. In fact this was done concurrently with CycleGAN; there is another paper, DualGAN I believe, where this is essentially the architecture: the mapping takes in both an image from the source domain and a random noise source. However, just changing your architecture is not enough, because even if you do, the noise is doomed to be ignored. To see why, the reason is essentially our loss function: the L1 cycle-consistency loss requires the following. If I map my a with a certain z, that produces some b for me, and then if I map that b back with another z', I should get back a. In this whole round trip the choice of z and z' is essentially arbitrary: whichever z or z' you choose, you still need to satisfy the same reconstruction. What that means is that the noise source is necessarily ignored once you impose the cycle-consistency loss and optimize it to its fixed point, and that's not good.
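Written out in my notation, the stochastic variant of the cycle term is

$$\mathcal{L}_{\text{cyc}} = \mathbb{E}_{a \sim p(a),\; z, z' \sim p(z)} \left\| G_{BA}\!\left(G_{AB}(a, z),\, z'\right) - a \right\|_1,$$

and because z and z' are sampled independently, the reconstruction has to succeed for every pair of noise draws, so the generators are pushed to make their outputs not depend on the noise at all.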
Then there was the Augmented CycleGAN paper, which proposed a way to solve this. You still augment your architecture with the noise, but instead of only learning the mappings from A to B and from B to A, you also learn an encoder for each noise source, not unlike how an encoder is used in a variational method. It's actually pretty interesting. The way it goes is: I have some ground-truth image a, and I say that this ground-truth image a comes from some corresponding b together with a corresponding noise source z_a. Then I have this blue arrow, which is a network that infers what that z_a is; basically I'm inferring which z produced my a, and I'm also going to infer which b produced my a. So instead of only inferring the corresponding b, I infer that as well as the noise source that produced me. Now, with both the noise source and the corresponding b, I can map back to an a' using the other arrow, the mapping coming back from B to A, and in the end I say that a and a' should be similar in an L1-loss sense. Now it's okay, because I am choosing a specific z for each particular data point. If you think about it in an information-theoretic sense, whatever information is not captured in b gets pushed into z, and that allows you to perfectly reconstruct the original image while maintaining the ability to have diverse mappings, because different a's come from different z's.
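Here is a minimal sketch of that augmented cycle in PyTorch. The networks are tiny placeholders, and the adversarial terms on the generated b (and on the inferred noise) are omitted; only the reconstruction path described above is shown:

```python
import torch
import torch.nn as nn

latent_dim = 8

class Gen(nn.Module):
    """Placeholder mapping (image, z) -> image."""
    def __init__(self):
        super().__init__()
        self.img = nn.Conv2d(3, 3, 3, padding=1)
        self.z = nn.Linear(latent_dim, 3)
    def forward(self, x, z):
        return self.img(x) + self.z(z)[:, :, None, None]

class Enc(nn.Module):
    """Placeholder encoder inferring z from an (a, b) pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(6, latent_dim)
    def forward(self, a, b):
        feats = torch.cat([a.mean(dim=(2, 3)), b.mean(dim=(2, 3))], dim=1)
        return self.net(feats)

G_AB, G_BA, E_za = Gen(), Gen(), Enc()
l1 = nn.L1Loss()

def augmented_cycle_loss(a):
    z_b = torch.randn(a.size(0), latent_dim)   # noise for the A -> B mapping
    b_fake = G_AB(a, z_b)                      # translate a with a sampled z
    z_a_hat = E_za(a, b_fake)                  # infer which z_a produced a
    a_rec = G_BA(b_fake, z_a_hat)              # map back using the inferred z_a
    return l1(a_rec, a)                        # a should be recovered exactly

a = torch.randn(4, 3, 32, 32)
print(augmented_cycle_loss(a).item())
```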
[Audience question] Yes, so the question is how you prevent the model from putting everything into z. You could, but then it might fail the marginal matching criterion. I guess the worry is that the A-B relationship could become decoupled: from a I would just match some arbitrary b that has no real correspondence with a in the first place. But remember, that has always been the problem; even with the original CycleGAN you could still produce an arbitrary mapping that is consistent but is not the ground-truth mapping. So what I'm saying is that this doesn't make it worse. To say it again: you can play this out over multiple steps, and the samples it produces should still match the original marginal distributions; there are many ways you can play with this. You have a GAN loss on b, and I think you also have a GAN loss on z, whose marginal you restrict to be a Gaussian or something like that, so in a sense you cannot put infinite information in there; both of them act as information regularizers. It's really more like an adversarial autoencoder, which we didn't cover in lecture: it's kind of like a VAE, but instead of a KL loss you use a GAN loss. So it is very much like an autoencoder model that is trained with a GAN. Could it break? That applies to everything we go over today; there are holes in all of these methods. Oh, you mean why they don't use a VAE? I think it's probably that the mapping from A to B wouldn't do well; you wouldn't get visually appealing results otherwise. For z I think they could actually use a VAE-type loss, but then it would still not be a VAE, because there is still this GAN piece in there. [Another question] No, it's not a fair comparison, I would say, though if your dataset is small, GAN training is usually pretty fast as well anyway. But that's off topic.
So, operationally, what does it mean to go through one cycle of this cycle-consistency loss? We get some image from the source domain, and then we randomly sample a z for my mapping to B, because remember that the mapping from A to B is now also stochastic, so b could take a lot of different forms, and I generate a noise source that dictates which one it is. That is what I sample, and then there is a set of mappings to go through: the mapping from A to B now takes a and a noise source for B, and that gives me b; then from this b and a I can try to guess what the z was that generated my original a, which is my encoder for z_a; and finally I plug in the generated b and the inferred z, and from these two I get back an a' which supposedly should be close to my original sample. So that's all good, and it's fairly easy to implement, just a small surgery on CycleGAN. How well does it do? The first thing you would want to run is simply giving z as an additional input to the mapping, which they call stochastic CycleGAN, I believe; that is, without changing the loss function, without introducing the encoder. That's the column we are looking at, and the test here is: we sample an edge map from my ground-truth data and feed it through the model with different z's, so imagine these outputs coming from z1, z2, z3, z4.
This is surprising, right? Because originally we went through the argument that just changing CycleGAN to take in z wouldn't actually make use of z; it would just ignore it. But what we are seeing here is different: you give it an edge mask and it actually generates diverse samples for you. That's interesting: if we look at this shoe, there are apparently all different colors, and even when we use the augmented one, the new loss function with the encoder, I would say they look about the same, the same kind of diversity. So it is kind of interesting: why does stochastic CycleGAN work at all here, especially since this is in sharp contrast with the analysis we just went through? In particular, if I take a black shoe, map it to an edge mask, and then map it back, then if I get a white shoe back, which is something that could happen, I'm going to incur a huge L1 loss, because black and white are the two ends of the spectrum in your color space. So that is somewhat puzzling: if it can generate these diverse samples, that would suggest it's not optimizing its cycle loss well, but it is optimizing the cycle loss just fine. The very interesting thing here is that when CycleGAN goes from a high-entropy domain, like RGB images, to a low-entropy one, like a semantic or edge mask, it can actually hide information in some kind of high-frequency pattern, and that is what we are seeing here.
Going back to the black shoe example: when you map a black shoe to its edge pattern, the model gives you a seemingly plausible edge pattern but adds some high-frequency noise to it that marks it as coming from a black shoe. The reverse mapping can then look at the rough shape of that pattern, and also read the high-frequency noise encoded in there to say, oh, this should be black, and that's how it manages to still satisfy cycle consistency: it consistently gets the same color back by hiding imperceptible information in the mask, in the edges, which is pretty interesting. The way you can show that it is doing this is by constructing an experiment: this is domain A, this is domain B; you take the b that comes out, fix it, and then try to sample different z's for the reverse mapping. If this comes from a model trained with the plain CycleGAN loss, you will see that the mask itself, even though it seemingly doesn't encode color information, is implicitly encoding it: the mask coming out of my model has color information hidden in it in such a way that when I sample different z's I always get the same output, and that is how it is still able to satisfy the cycle-consistency constraint. If instead you train with the augmented CycleGAN loss, the mask looks basically the same, but there is seemingly less information hidden in it, so when you sample different random noise z you actually get shoes of different colors back.
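One way to probe for that behavior is a check along these lines (a sketch with a hypothetical `decoder(mask, z)` interface, not the paper's code): near-zero variance across z suggests the missing information was hidden in the mask itself.

```python
import torch

def diversity_given_mask(decoder, mask, num_samples=8, latent_dim=8):
    """Probe whether the B -> A decoder actually uses its noise input.

    If the forward mapping hid the missing information inside the mask
    (the plain-CycleGAN failure mode), the outputs will be nearly identical
    for different z; with the augmented loss they should vary.
    """
    with torch.no_grad():
        zs = torch.randn(num_samples, latent_dim)
        outs = torch.stack([decoder(mask.unsqueeze(0), z.unsqueeze(0)).squeeze(0)
                            for z in zs])
    return outs.var(dim=0).mean().item()   # close to 0 means z is being ignored
```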
[Answering an audience question] I don't remember; that would be a good thing to check. But as we discussed, if z is very powerful it can potentially encode too much, so I think there is a balance there. And this figure is kind of a cycle walk, which is somewhat interesting and similar to what we just discussed: you go from A to B, then B to A, then A to B again, while cycling through different noise sources. If you do this kind of random walk with an Augmented CycleGAN, you can see that even though the mask stays relatively the same, the overall appearance, the color and texture, does change over time, whereas if you train with the original CycleGAN loss you just get the same pair repeated again and again. So that's a somewhat interesting and, in my opinion, relatively simple and elegant extension to CycleGAN that helps you deal with stochastic mappings. Any questions on that before we move on? So the next question is: can we do better? So far we have covered two learning principles, one is marginal matching and the other is cycle consistency, and it's a good question whether those are all of the invariances we can rely on, or whether there are additional learning signals we can derive. It's a good open problem.
If we step back and think about this whole problem, it is really about aligning two distributions, and we found that without knowing what's inside them it is really difficult: if we think about the categorical distribution with even probabilities, it's just impossible to align, because we treat the values as pure black boxes; all values look the same as each other. So one idea for moving forward is to look inside a random variable. We can say: this image is not just a huge high-dimensional random variable to me; I can actually look inside and see what is in there, and maybe use that to help us. High-dimensional A and B typically have certain structures in them that can be leveraged, and as people have pointed out, when you use a convnet and a patch-based discriminator in a CycleGAN you are already implicitly employing some of this inductive bias, but I think there are cases where we can push this even further. The best example I could find is in NLP. Say domain A is all English sentences and domain B is all French sentences. Then I can get a random sentence from all English sentences and a random sentence from all French sentences; they might have the same empirical frequency but be totally semantically unrelated, which is likely to happen. Cycle consistency wouldn't rule that out either; this is just going back to the problem that when you have distributions with uniform densities, nothing can help. But what we do know is that each sentence is made up of words.
And it is very unlikely that in two totally semantically unrelated sentences the words would have the same kind of statistics; I am using the term statistics loosely here, and I will say more about what we can do with this. Basically, the exercise we have gone through is this: instead of thinking of it as distribution alignment between sentences in different languages, if we are allowed to look inside each random variable, at its sub-components, and do some inference on those sub-components, that can help us circumvent the problem of not having enough learning signal. In the case of NLP the sub-components are the words, and the larger, higher-dimensional random variables are sentences or paragraphs. So that's interesting: now what we can do is, for one, first of all align the words; we can think of ways to do distribution alignment on words. Even more, we can make use of how different words occur together. For example, the word "I" is most likely to be followed by "am", because these two co-occur most frequently within this larger random variable, the sentence. So what can we use to capture this kind of co-occurrence statistics of sub-components? Well, we learned one of them, word2vec, probably two lectures ago. Just a recap on skip-gram word2vec; it's really simple. Basically, what it is trying to do is say: given one word in a sentence, the other words in that sentence are more likely to occur than any other word in my corpus. In practice you wouldn't sum over your whole dictionary; you would do some negative sampling to optimize it.
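For reference, the skip-gram negative-sampling objective for a center word $w_I$ and an observed context word $w_O$ is (standard word2vec notation, not from the slide):

$$\log \sigma\!\left(u_{w_O}^{\top} v_{w_I}\right) \;+\; \sum_{i=1}^{k} \mathbb{E}_{w_i \sim P_n(w)}\!\left[\log \sigma\!\left(-u_{w_i}^{\top} v_{w_I}\right)\right]$$

where $v$ and $u$ are the input and output embeddings, $k$ is the number of negative samples, and $P_n$ is the noise distribution over the vocabulary.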
In the end you get vectors with the property that if two vectors are close together in the vector space, the corresponding words are more likely to occur together in a sentence, and if you train a very large model on a lot of text data, the embeddings capture how different words are likely to occur together. So that's skip-gram. What's really interesting is that this kind of word2vec method exhibits really interesting vector arithmetic. This is again a recap slide: if we look at the direction from a country to its capital, the vector is actually relatively similar across a lot of these different pairs. So we might ask: if this vector arithmetic makes sense, does that mean the vector representations of those words are distributed in a particular, structured manner? And more importantly, if similar vector arithmetic holds true for all languages, meaning I train a word embedding for English, say on the left, and I also train an embedding for Italian, and they exhibit the same vector arithmetic, meaning all the different embeddings are placed together in such a way that you can do the same kind of country-to-capital translation, then that is a really strong inductive bias for us. If that holds true, then we can possibly align words simply by uncovering some kind of affine or linear transformation that aligns these two spaces. And very surprisingly, it's actually true: for the word2vec variants we use, say fastText, if you train it on one language and you also train it on other languages, the embedding spaces differ essentially only by a rotation, so you can learn a rotation matrix that rotates another language's embeddings into your space.
And the result would be something like this; this is a graphic grabbed from a Facebook blog post that illustrates it really well. You basically get embedding spaces that exhibit similar relative structure, but the absolute locations are undefined, so what you can do is learn a way to align them together, and after the rotation, one point in the embedding space is very likely to correspond to the same word in the different languages. This totally blew my mind, that this could work. So initially, with the citations here, you could basically use a small dictionary, a language-to-language dictionary, to learn the alignment. It was still supervised, but the search space is much smaller: instead of mapping each word through a neural net to another word, every word is already represented by some embedding, and now I am only learning the rotation that is shared across all embeddings, so basically a couple of data points is enough to specify it. [Answering a question] I don't know how big the dictionary is, probably a couple of thousand entries, but that is a totally uneducated guess; I am not an NLP person, and it could be pretty big. So that's pretty interesting, and that's roughly where things stood until around 2017: you can align the two embedding spaces using examples; you just go to a French-English dictionary, look up a couple of words, and use those pairs to align the two embeddings.
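A minimal sketch of that supervised alignment step (a toy example of my own; standard recipes solve this orthogonal Procrustes problem on dictionary pairs of fastText vectors):

```python
import numpy as np

def fit_rotation(X, Y):
    """Orthogonal Procrustes: find orthogonal W minimizing ||W X - Y||_F.

    X, Y are (d, n) matrices whose columns are embeddings of the n dictionary
    pairs (source word i, target word i). The closed-form solution is
    W = U V^T where U S V^T is the SVD of Y X^T.
    """
    U, _, Vt = np.linalg.svd(Y @ X.T)
    return U @ Vt

# Toy usage with made-up embeddings: 300-d vectors for 2000 seed pairs.
d, n = 300, 2000
X = np.random.randn(d, n)
W_true, _ = np.linalg.qr(np.random.randn(d, d))   # a hidden orthogonal "rotation"
Y = W_true @ X
W = fit_rotation(X, Y)
print(np.allclose(W, W_true))                     # recovers the hidden map
```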
That's really cool, that you can do that. [Answering a question] Yes, apparently the spaces are scaled the same, or at least similarly enough. So that is a supervised way to align these word embeddings, and it's really interesting that you can capture the relationship with a simple rotation; well, probably not simple, but a rotation. What this more recent paper has done is to show that you can actually do that, with really good performance, in an unsupervised way: now I have two embedding spaces and I align them without any training signal. The way that is done is basically just using the principle of marginal matching again: each possible rotation specifies a mapping, and you are going to say that after the mapping, my marginal distributions should match, and they just train that with adversarial training; so again, a GAN-style loss to make sure the marginals match. After they do that, and one known issue is that the result of GAN training is usually not very robust or high-precision, they have found a rotation that roughly aligns the two distributions. Then they select some top pairs of high-frequency words under that rough alignment, assume those are actually ground-truth alignments, and use them as actual pairs to solve for the exact rotation, and apparently this works really well. There are some additional tricks in terms of nearest-neighbor retrieval in the embedding space that I didn't go into.
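The refinement step can be sketched roughly as follows. This is a simplification under my own assumptions: embedding columns are unit-normalized and sorted by frequency, and the CSLS neighbor criterion used in the actual paper is replaced by plain mutual nearest neighbors; `fit_rotation` is the Procrustes solver from the previous snippet:

```python
import numpy as np

def refine_alignment(W, X, Y, fit_rotation, top_k=5000, iters=5):
    """Build a synthetic dictionary from mutual nearest neighbours among
    frequent words under the current rotation W, then re-solve Procrustes.

    X: (d, n_src) source embeddings, Y: (d, n_tgt) target embeddings,
    columns assumed unit-normalized and sorted by word frequency.
    """
    Xf, Yf = X[:, :top_k], Y[:, :top_k]
    for _ in range(iters):
        sims = (W @ Xf).T @ Yf                 # cosine similarities after rotation
        fwd = sims.argmax(axis=1)              # best target for each source word
        bwd = sims.argmax(axis=0)              # best source for each target word
        src = [i for i in range(Xf.shape[1]) if bwd[fwd[i]] == i]   # mutual matches
        tgt = [int(fwd[i]) for i in src]
        W = fit_rotation(Xf[:, src], Yf[:, tgt])
    return W
```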
So this is the result they have: they compare against cross-lingual supervision and against their own method without any supervision, and really surprisingly, they can get performance competitive with using ground-truth data of actual pairs. Again, this is not as complex as translating whole sentences, this is only translating words, but I still see this as very impressive, that it can work at all and become very competitive with supervised methods. So then the next part of this, again another paper from Facebook, is that you can actually leverage all three of the core principles that we have covered so far. They use word-level alignment, meaning they start from what we just looked at, the unsupervised word-level alignment, so you are not just looking at the sentence level, you are looking inside the sentence, at sub-component-level statistics. They also use monolingual language models to make sure that what you translate actually looks like a real sentence, which you can basically see as marginal matching. And then they also have this thing called back-translation, which is another variant of cycle consistency: you translate from English to French and then from French back to English, and you should get back the same sentence. So this is a paper that essentially utilizes all three of these methods, and from there I think they get state-of-the-art unsupervised machine translation results; I don't remember the precise numbers.
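Schematically, the back-translation piece looks like this (a sketch with hypothetical model interfaces, `translate` for decoding and `loss` for a standard supervised seq2seq loss, not the paper's actual code; the monolingual language-model and word-alignment initialization terms are omitted):

```python
def back_translation_step(model_en_fr, model_fr_en, en_batch, fr_batch):
    """One round of back-translation on two monolingual batches.

    Each model translates the other language's sentences to build synthetic
    pairs, and the reverse model is trained to reconstruct the original
    sentence (the English -> French -> English cycle from the lecture).
    """
    # English -> French gives synthetic (French, English) pairs for fr->en.
    synthetic_fr = model_en_fr.translate(en_batch)
    loss_fr_en = model_fr_en.loss(src=synthetic_fr, tgt=en_batch)

    # French -> English gives synthetic (English, French) pairs for en->fr.
    synthetic_en = model_fr_en.translate(fr_batch)
    loss_en_fr = model_en_fr.loss(src=synthetic_en, tgt=fr_batch)

    return loss_fr_en + loss_en_fr
```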