CHANNEL_NAME
stringclasses
2 values
URL
stringlengths
43
43
TITLE
stringlengths
18
100
DESCRIPTION
stringlengths
621
5k
TRANSCRIPTION
stringlengths
958
84.8k
SEGMENTS
stringlengths
1.51k
143k
Generative Models
https://www.youtube.com/watch?v=gwI6g1pBD84
GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models
#glide #openai #diffusion Diffusion models learn to iteratively reverse a noising process that is applied repeatedly during training. The result can be used for conditional generation as well as various other tasks such as inpainting. OpenAI's GLIDE builds on recent advances in diffusion models and combines text-conditional diffusion with classifier-free guidance and upsampling to achieve unprecedented quality in text-to-image samples. Try it yourself: https://huggingface.co/spaces/valhalla/glide-text2im OUTLINE: 0:00 - Intro & Overview 6:10 - What is a Diffusion Model? 18:20 - Conditional Generation and Guided Diffusion 31:30 - Architecture Recap 34:05 - Training & Result metrics 36:55 - Failure cases & my own results 39:45 - Safety considerations Paper: https://arxiv.org/abs/2112.10741 Code & Model: https://github.com/openai/glide-text2im More diffusion papers: https://arxiv.org/pdf/2006.11239.pdf https://arxiv.org/pdf/2102.09672.pdf Abstract: Diffusion models have recently been shown to generate high-quality synthetic images, especially when paired with a guidance technique to trade off diversity for fidelity. We explore diffusion models for the problem of text-conditional image synthesis and compare two different guidance strategies: CLIP guidance and classifier-free guidance. We find that the latter is preferred by human evaluators for both photorealism and caption similarity, and often produces photorealistic samples. Samples from a 3.5 billion parameter text-conditional diffusion model using classifier-free guidance are favored by human evaluators to those from DALL-E, even when the latter uses expensive CLIP reranking. Additionally, we find that our models can be fine-tuned to perform image inpainting, enabling powerful text-driven image editing. We train a smaller model on a filtered dataset and release the code and weights at this https URL. Authors: Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, Mark Chen Links: TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://discord.gg/4H8xxDF BitChute: https://www.bitchute.com/channel/yannic-kilcher LinkedIn: https://www.linkedin.com/in/ykilcher BiliBili: https://space.bilibili.com/2017636191 If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Hello there, today we'll look at Glide towards photorealistic image generation and editing with text-guided diffusion models by Alex Nicolle, Prafula Dhariawal, Aditya Ramesh and others of OpenAI. This paper on a high level, well, I'll just show you what you can do. I'm sure you've all seen this paper in one way or another. It is another paper that generates images given a piece of text. But this time, it's not a GAN or anything like this or a VQ VAE. This time, it is a diffusion model. This is a different class of models and we'll go into what they are and how they work. But essentially, you can see right here that the model that turns out of this and of course, this being OpenAI, they train this on a massive scale and this model is really big. But what comes out of it is very, very, very much better than for example, Dali, which always had this kind of blurriness to it. You can see right here a crayon drawing of a space elevator, pixel art, corgi pizza. So this is trained on a big scrape of images from the internet. And as you can see, the outputs are pretty stunning. So it gets for example, the shadows right here, it gets them correctly, even the red on blue blending. It gets different styles like the Salvador Dali style. It combines different concepts, although maybe you know, this has been seen on the internet somewhere, but it is able to combine different concepts. And given that these are diffusion models, you can actually do a bunch of more stuff with them. For example, in-painting is immediately accessible to this model. Now, usually, in-painting is accessible to diffusion models. However, they actually train an in-painting model on top of this. But in essence, a lot of stuff would be accessible. So this is now possible where you say, okay, I only want to change a part of the image like this part right here, you give a text saying, a man wearing a white hat, and the model generates the man wearing a white hat. This is very cool. You can do things like this, where you first, so the pictures here are a bit confusing, but you first generate an image from a text prompt, like a cozy living room, then you get this living room. And then here, the user would annotate this window sort of would draw over it, and will give the next text prompt. The next text prompt would be a painting of a corgi on the wall above the couch. And the model, it's an in, so this is the in-painting mode, the model would only be able to paint the green area. So it would sort of try to conform to the text using only the green area. And therefore, it would make this corgi picture on the wall right here, then the user goes further and says, well, now I'm going to paint this area right here. And I'm going to issue the prompt around coffee table in front of a couch, and the model will generate it and so on. You can see that this enables sort of an interactive creation of this scenery at the end, the couch, the couch in the corner of the room. So changing the entire wall right here, you can see the back of the room has some space. And now it's being changed to a wall. So this is the kind of stuff that's possible. Editing right here. Even what's this, this sort of sketch editing where you don't only mask, but along with the mask, you provide sort of like a sketch as you can see right here. So this part here is blue, and then the part here is white. And that's also the mask that the picture receives. And you can see that only one cloud in the sky today, it sort of you can guide even more. So you can guide with text, and you can guide with sketch color, and so on. So this is a very, very, very cool model, you can see the quality is very, very good. Here is for example, a comparison. These are real images from the MS Marco data set MS Coco, sorry. This is a data set of pictures with associated labels, so text descriptions of the picture. So you have some ground truth. So the ground truth here will be this one. And the label is a green train coming down the tracks. You can see Dali generates something neat, but it's sort of blurry. It's kind of cartoonish, as all the Dali pictures are, if you look in this row, the last one's pretty good, but all the other ones are sort of elephants are more like blobs. And we've seen this in the in the Dali paper, it was impressive at the time, but this is way more impressive. And then their best model this clip that sorry, this glide model with classifier free guidance, you can see right here, it generates like a high quality train that fits the image fits the image description. And you can see in the entire in the entire row right here, it's pretty good at doing that. So there are a lot of components to this model. And we're going to explore them a little bit. OpenAI has released in classic OpenAI fashion, they've released like a small, very filtered version of that model, because they're worried about safety, like anyone's going to believe them after GPT two, they've just been doing this every single model, right? They're just like, Oh, no safety, people can make deep fakes. Oh, no, like, no one's made a deep fake. Like GPT to all the worries, they were just not true. No one has used GPT to to spread around fake news. And no one like no one's going to use this model substantially to make very misleading pictures. But we'll get to that as well. Alright, so what is a diffusion model? And that's sort of at the core of this thing right here. A diffusion model is a different type of generative model than maybe you're used to from like a GAN or a VQ VAE. So in a GAN, a GAN is probably the closest right here. So again, it's sort of like a neural network with a bunch of layers. And what you do is you sample from some sort of a distribution, you sample some noise, right, you sample some noise, you get some noise vector. So here's a vector, which is complete noise, every entry is noise. You put it through the network, the network generates pretty picture. And you train the model using a discriminator. In this case, you train the model to produce pretty pictures given the noise and the noise act sort of as a source of randomness. So the mapping is clear, you train to map from noise to picture. Now, a diffusion model goes in almost like a different direction. So what you do is during training, you have a data set, and you take an image. So from from a data set, you have a data set, you take an image out of it. Let's say this is your trusty, trusty cat, and you're going to, you're going to put noise onto this image. So you're going to add noise and noise. Let's represent that with Sigma. No, I think they do, they do epsilon or eta in this in this paper right here. So you add that, and then you get a slightly noisy version of this. Let's just let's just wiggle a bit, wiggle, wiggle, wiggle, and you do it again. So through adding noise, and you add lots and lots and lots of noise, okay. So every time you add a tiny, tiny bit of noise, and that means that more and more your picture is just going to be blurry and blurry and blurry. Now, if you do this for long enough, in the limit, you can prove that obviously, if you do this infinitely many times, what comes out at the end is going to be just normally distributed. If your noise is normally distributed, and you scale every time correctly, then whatever turns out is going to be normally distributed with some parameters here. So this right here is going to be a known distribution. If you if you add noise for long enough, if you destroy all of the information that the picture has, then you'll end up with sort of an entry in a known distribution. However, every step that you do right here is very small, every step, you just add a little bit of noise. So technically, it's possible for a model to look at this picture right here, which is kind of a bit of a blurry version of the cat, and predict and learn to predict the more sharp version of the cat. Okay, this is a foundation of many, many sort of denoising models, many up sampling models, super resolution models, what have you, okay, they do this in one step. But essentially, here we say, the individual step is small enough such that the model can technically learn to reconstruct it. However, if we do it for long enough in you know, going to infinity, the we are at a known distribution, namely the the standard normal distribution. And these two things together mean that, well, if we have trained the model to reconstruct the individual steps, what we can technically do is we can now go ahead sample from this known distribution, right? Because ultimately, we want to sample from the data distribution, but that's hard because we don't know it. But here, we can just sample some noise, from a known distribution, then put it through this process of reconstruction, all the way all the steps that we did up here during training. During training, we just noise and noise and noise the images again and again and again, we trained the neural network to for every step to reconstruct the previous step. So we can now just put it through this series of trained neural networks. In fact, it's just going to be one neural network that gets the index of the step as a parameter and outcomes an image, right outcomes a true data image. If these two things up here hold, then this should be possible. This is the basis for these diffusion models. So specifically, given a sample, that's what they say here, given a sample from the data distribution, this is x zero. So this is the data distribution, we produce a Markov chain of latent variables, x one to xt, with everyone being a more noisy version, and xt finally being of a like a known distribution, because we do it infinitely, or a large number of times, by progressively adding Gaussian noise to the sample. So you can see right here, we take xt minus one, we scale it down a bit, because if you wouldn't do that, the sort of the image would just increase in scale over, because we just keep adding stuff. But this, it's just a rescaling, there's nothing more happening here. So we add noise, this here is the mean of a distribution, the covariance matrix here is a diagonal, which essentially means we just add a bit of noise of the scale of alpha t. No, sorry, we just add a bit of noise, we rescale by alpha t, which is a scaling factor. And that's how we obtain the next step, the xt. So again, we do this enough. So we take xt for the next step, we plug it in here, and then we obtain xt plus one, and so on. So if the magnitude of the noise added at each step is small enough, the posterior is well, well approximated by a diagonal Gaussian, that's what they say right here. So what does this mean? The posterior, it means that this is the reverse step, right? I have xt, and I'm looking to recreate xt minus one. So if the noise is small enough, then the posterior is well approximated by a diagonal Gaussian, and we have a hope to learn it with a neural network, right? Furthermore, if the magnitude of the total noise added throughout the chain is large enough, then the last step is well approximated by a known by a standard normal distribution. These properties suggest learning a model for this posterior, right, we have xt, we want to reconstruct xt minus one to approximate the true posterior. Okay, so we are going to learn a neural network that it doesn't exactly reconstruct the image. But this is a variational model. So what we're going to do is we're going to plug in xt into a neural network, the neural network is going to predict the mean and the covariance matrix of the next step of the chain of the next step of the denoising chain. And then we can use this to produce samples, we simply, sorry, we start, we start with Gaussian noise, which is the end, and we gradually reduce the noise in a sequence of steps until we are at the data distribution, or at least the predicted data distribution. So this is not a new idea. This has been and I think I have the references open, this has been explored previously, for example, this just an example right here. denoising diffusion probabilistic models is one of the papers that introduced lots of these things you can see right here. These have still been trained on like, just images as such. So this is the left is trained on a face data set. The right is trained on CIFAR 10. This is unconditional generation without the text prompt or anything like this. But you can see the same principle applies, we simply add noise during training, and we learn a neural network to remove the noise to predict what the image would look like one noise step less. Here already, there was an invention that the paper here would make use of namely the loss function right here, we're going to look at that in just a second. So that's that's the second. So they say, while there exists a tractable variational lower bound, better results arise from optimizing a surrogate objective, which reweighs the term in the variational lower bound. So the loss we're going to optimize right here is during training, if you can see right here what during training, we, we train the neural network to reconstruct one of these steps, right, each sample in training is going to be some image x t minus one, and some image x t. And we're going to reconstruct, we're going to train the neural network to predict x t minus one from x t or the variational sort of the distribution of that. So this is a training sample. Now, how do we get the training sample, what we can do is we can take x zero right here, and we could go through and add and add and add noise. But since we always add Gaussian noise, we can simply do this in one step. There's nothing depending intermediately right here. So we do it in one step right here. And then we add another bit of noise. That's how we get the two samples. And then rather than predicting the image itself, what these models do is they will predict the noise. So what we actually predict is going to be the noise, the noise epsilon here, which we can calculate by x t minus x t minus one. So this is our prediction target, this is our loss function, the network is supposed to output this right here. And of course, we know the true one, you can see the network will try to output this given x t and an index into which step it is. So we're going to tell the network by the way, here's the noise. Here's the number of steps we're into this process. And we're going to train the network to read to say, what was the noise that was added, it's a bit easier, just, I think it's just like a scaling, scaling property, because this is going to have sort of zero mean and unit variance. So it's easier to predict for a neural network. So that is one of that is very standard in diffusion models. The next thing they introduce is guided diffusion. By the way, they also mentioned somewhere that they, they learn the covariance matrix. Yes, there's another paper that also learns the covariance matrix, this first paper just fixed it at a diagonal. But then there is another paper that improved upon that, called improved denoting diffusion probabilistic model, interestingly, by the same authors here. And they, they show a method to learn this covariance matrix, which is mostly a scaling issue, because there is a narrow band that is a valid covariance matrix. And they show with the correct parameterization, they can in fact learn it and get better, better performance. But this just for reference, it's not super important right here. The second part is more important. So this is guided diffusion. So what we can do here is we can build a model, let's just assume we have images and we have class labels for the images, let's leave away the text right now. Okay, so we have a class label for here. So this has a class label of cat, for example, there's also dog and so on. So what we can do is we can train the neural network here, you know, each step, we train it to reconstruct one step. So that's going to predict the noise that was added, given the image xt, given the index t. What we can also do is we can say, by the way, it's also, we give it the label y. So y, in this case is cat. So we can train a class conditional model. And that, you know, has some, some advantages, we know class conditional GANs work quite well. So if you give it the class label as an input, you can often improve that. And you would do that by either embedding the class label as a one hot vector into the network or something like this. Now with a text model, it's a bit more tricky, right? But what you can do is you, let's say this here, this here is some sort of a neural network, right? So xt goes in, this is xt goes into an encoder with a bunch of layers, maybe the t itself also goes in here as some sort of a float or an embedding a one hot vector or something like this. And the class label could also go in here, right? However, if you have text, what you can do is let's say you don't have this, but now you have a text description, they call this C. So you can first put the text description to through and it's own network, and then combine the embeddings. So either put the embeddings here as sort of a class embedding, or you can put the embeddings into each layer right here in this stack. And I think they do both. In any case, you can embed the text right here of the image, because their data set always has images and text together. So that's what I said at the beginning. So you can take this text, you can put it through an encoder itself, you can input it into this process right here. This is the network that is going to ultimately predict the added noise given an image. And yeah, the network can take inspiration and take can learn from the text. So if it sees this picture right here, for example, but in a very noisy way, and it has the text information, a couch in the corner of a room, it's obviously going to perform better than if it wouldn't have the text. And ultimately, that's going to unlock the capability that we can input a text at the very beginning, and then the model guided by this text will produce a living room, sorry, a couch in the corner of a room. So now, is this enough? And the answer is not yet. So class conditional models are working fine. However, it's better if you do what's called guided diffusion. So in guided diffusion, we not only want to make our models class conditional, but we want to guide them even more, we want to push them into a direction. And this is called guided diffusion. And one way to do it is to say, well, I have an additional classifier, I have a classifier, for example, an ImageNet classifier, right? And if I want to push my diffusion process towards a particular label, I can take that ImageNet classifier, and I can go along the gradient of that. This is very much like things like deep dream work, or this is essentially clip guided diffusion is this but with clip. So I have the clip model. And if you don't know what the clip model is, this is a model where you input an image and a piece of text, and it tells you how good, how good do the so let's put that a sigma is do these two things fit together well or not. Now, if you think about the gradient of this with respect to the image, then you can see that you can push the diffusion process into a direction. So this is one way of doing it. But it means that you have to have some sort of an external classifier to go by. There is also a method called classifier free guidance. And this was introduced by Hoagy, who is a famous author, and he's a very famous author. And he's a very famous author, and he's a very famous author. And he's a very famous author. And he's a very famous author. And he gives us a classifier free guidance. And this was introduced by Hoagy and Salomon. And this is where you sort of use the models own knowledge about its class conditioning in order to do this guidance. And this is a bit weird. And I feel like, I feel like, I feel like this And I feel the fact that this works appears to be a little bit of just a hint that our current models aren't making use of the data fully because we have to do these tricks at inference time. So it's more pointing towards us not really being the masters of these technologies yet, rather than this being some sort of an intrinsically good thing to do. But essentially what we want to do is during training, we train these class conditional things, right? We train, let's produce the noise that was added to xt in the last step conditioned on y and y here could be a class label, y could be the input text, y could be, you know, pretty much any conditioning information. And then we also alongside that, sometimes we don't provide that label at all. We don't just don't provide the label, which essentially means that we are training an unconditional generator. So we just simply forget the fact that we have labels. We simply train the image generation model unconditional. So we just give the model xt, we ask, here is just some image without description, without nothing, what was the noise added to this image? And now at inference, so we just train the model in both ways. During training, we sometimes just leave away the label. This could be beneficial, as this part, in fact, would be the opportunity to bring more data into the picture, right? Let's say I have only part of my data is labeled and part of my data is unlabeled. We could actually in here, bring in the unlabeled data, and therefore get more data into the system than we usually had. But given that they probably have enough data with their giant image caption data set here. By the way, it's the same data set they used for Dali. Given that it's probably they just leave away the text at during training for some of the, they say right here, unlabeled with a fixed probability during training. Now during inference, you can do something with that. What you can do during inference, you can say, well, if I am in the situation where I have an image and a label, and I asked my model to generate the noise, what I can do is I can do a little bit like the same thing I did with the clip guiding. So here I let my model predict the un-noised version, but I also push it into the direction that clip tells me would be a good image. So it's two things. This is given the image, what would be the un-noisy or the less noisy version. And this one would be, well, in general, which image would be sort of appropriate for this piece of text. It makes the two objectives. This is very much the same. So if you unpack this, you can see that this right here, unconditionally asks, given this image, which is the less noisy version of the image or give me the noise that is, that was added to the image. And then you push it into this direction right here. And you can see this is the difference between the noise that the model predicts unconditionally and the noise that the model predicts conditioned on the label. So this is a direction, this direction points very much into the direction of the noise that was specifically added to the label. Right. So it's the difference between the conditional and unconditional prediction. We add that to the predicted noise right here. So the model predicts, OK, this is the noise that was added and the conditional model predicts this one. And then we simply push the prediction into this direction. You can see right here, there's a scalar S involved. S obviously must be larger than one because if S is smaller, like this is what we would predict, usually the conditional one. So now if S is larger than one, we're going to predict something more up here. And notice the difference. If we didn't have this, if we didn't have this, we would simply predict this point right here. We wouldn't know which one, which direction was a better direction because we also have the unconditional point right here. We can clearly say that this direction is probably the direction that goes into the direction of the conditioning information. So we can choose to sort of overdo it. Again, I think that is that's kind of a trick around the fact that we don't know. We don't know how to handle the information very well quite yet. I'm not sure about it. It seems like you wouldn't even have to do this necessarily. What you could also do if you want to go further, you could take sort of inspiration from the contrastive learning communities and maybe do some hard, some, you could also replace this part and this part, by the way. So these parts you could replace sort of by an expectation of these noises over some labels, y hat or y prime. So which means you could just sample some other text or some other conditioning information randomly and get an expectation. You could also do hard negative sampling. So you could take labels that are fairly close or you could take labels that are kind of confusing. And try to differentiate yourself. There's a lot of possibilities here. I can see that, but still it feels like a bit of a trick. Yeah. So good. That's what they do. They do clip guidance. So they do this classifier free guidance, which turns out to be the better variant. And they also do the clip guidance, which is what we discussed before, except with clip. You can see they've just replaced the gradient of a classifier with the gradient of the clip model. The clip model is simply an inner product between an embedding of the image and embedding of the text. And they say the reason probably that the classifier free guidance works better is because the clip sort of the diffusion models, what they do is they find like adversarial examples to clip and not necessarily good pictures. Now, I don't know if the classifier free guidance would also be something that could replace sort of the current notebooks that are flying around where clip is used, clip guided diffusion and VQGAN plus clip. But I'm not sure because the VQGAN, it seems already restricts the space of images such that it's not that easy to find adversarial examples because it always has to go through the vector quantization. OK, that's the model. Like the model is nothing else. It's a diffusion model. All right. This has existed before. It is conditioned on conditioning information. The diffusion model itself is conditioned in this case on text that goes through a transformer encoder, which is the blue thing right here. This embeddings are then sort of concatenated into the process of this diffusion model. The diffusion model is a model that for one of these steps predicts sort of tries to predict the reverse. It's the same model for each step. It just gets as an additional conditioning information which step it's currently trying to reconstruct. It always reconstructs the noise that was added. Training data generation is pretty easy. You simply add noise to an image and then you add a bit more. And then the difference between that is the target to predict. Then at inference time, at inference time, they also do this guided diffusion. That's either going to be achieved by CLIP and the disadvantage of that is that you have to have an additional classifier like CLIP. Not only that, but in fact, the classifier has also have to been trained on noisy images because otherwise noisy images are going to be out of its distribution. So they do in fact train noised CLIP versions. The disadvantage, as I said, is you need these additional model that's trained on noisy data. The advantage is that you get to bring additional information here. You get to essentially potentially even bring additional data sets that was used to train these other classifiers. You can use multiple classifiers, whatever. They also do classifier free guidance. These two things, they don't use them together. CLIP guidance and classifier free. They use them either or. The classifier free guidance is more like a hack where you alongside the conditional denoising train an unconditional denoising. So you train the model also to sometimes not be conditioned and then you push it into the direction away from the unconditioned towards the conditioned and beyond to make it extra conditioned, I guess. The disadvantage here is that seems like a hack. The advantage is that there's potential maybe to do some hard negative sampling and also it doesn't require an extra model on the side. And also in the unconditional training, you might bring in additional data that has no label. So training happens. It's a 3.5 billion parameter text conditional diffusion model at 64 by 64 resolution. This is way smaller than DALI, by the way. And this is cool. And a 1.5 billion parameter text conditional up sampling diffusion model to increase the resolution. So it's a two stage process. The diffusion model itself is at a 64 by 64 resolution. And then they have an up sampling models. It's also text conditional, but it is. So this is purely an diffusion up sampling model. It's very much the same principle, except that it now doesn't go. It doesn't go from noisy image or sorry, from from pure noise to image. It goes from low resolution image to high resolution image. And alongside of that, they train a noised clip model, which is the classifier that they're going to need to do guidance. Well, they describe here a little bit of the architectures. We're not super interested. At least I'm not super interested in the architectures. They're way big models. As I said, they release the small models. They don't release the big models. And they explicitly train for inpainting, even though you could do it with diffusion models without training. But they say if you train it, it behaves a bit better. So during training, they would sort of mask out random parts of the images and then use diffusion to reconstruct those. And yeah, the results are the results that we've already seen. These are pretty interesting. They do studies with it. So they do studies on these data sets. So as they increase the guidance scales, they the guidance scales are like the only thing, the only handle they have at inference time. That to trade off, to trade off diversity and sort of adherence to the data set. And it turns out that the classifier-free guidance, as you can see right here, is behaving better. This is the frontier right here. These always trade off two different metrics in the MS COCO data set here. Precision recall, here inception score and FID. And you can see the only time the clip guidance is better than classifier-free guidance is when you directly look at the clip score. That's why they say probably the clip guidance simply finds adversarial examples towards clip. They also let humans rate the pictures in terms of photorealism and caption similarity. And you can see that the classifier-free guidance wins both times. And that's pretty much it. They show some failure cases, which I also find pretty interesting. So an illustration of a cat that has eight legs is not a thing. Bicycle that has continuous tracks instead of wheels. It seemed like it seemed a bit like Dali as a model was more sort of sensitive or was more respondent to text itself. So to the prompt, whereas here it seems it's more like generating realistic images that has some sort of the words. So the words kind of match with the text. A mouse hunting a lion, not happening. Also a car with triangular wheels, also not happening. As you can see, I myself have tried the small model a little bit. And you can see, you can try it yourself. I'll put a link up. There is a Gradio space by the user Valhalla. Thanks a lot for creating that. So here is balloon race. You can see that works pretty well. A drawing of a tiny house. That's also OK. A hidden treasure on a tropical island. And I mean, it's a tropical island, right? But yeah, all the elephants had left a long time ago. Now only a few vultures remain and it's just kind of a bunch of elephants. So, well, the elephants are kind of walking away a little bit. Right. Yeah. Attention is all you need. Obviously, oddly Russian, Russian vibes from this picture. And this one is glory to the party. And I guess party is just sort of equated with birthday cake or so. So the sort of text sensitivity of this model might not be as good, but there might be opportunity to fiddle here. The samples as such, they look they look pretty, pretty cool. It's also not clear how much of a difference this is between the small model and the large model or how much effort into diffusion is put. They also say they they they release the model they release is sort of a model on a filtered version of a data set. And the filtered version removes, for example, removes hate symbols and anything to do with people. So they say it's not as easy to generate deepfakes. Yeah. And where was. Yeah, I think the the coolest one is where you can do this interactively. That is that is a pretty cool one. I want to look at lastly, where sorry for the scrolling around safety consideration. So there's so like they say, as a result, releasing our model without safeguards would significantly reduce skills required to create convincing disinformation or deepfakes. And they say they only release the small model. They say this somewhere. Where is it? Well, in any case, they only release a small model. But I just want everyone to remember GPT-2. And it was exactly the same. And to my knowledge, there's there's not the world is not in chaos right now because people have used GPT-2, which is sort of public by now and can be easily trained by anyone. The world is not in chaos, because people have access to GPT-2. It's, it's not the case. And I don't know why they do it because for PR reasons, or because they want to kind of sell it, sell the larger model, sell access to it. I mean, that's all fine. But don't tell me this is safety considerations. And yeah, the fact is, people are going to create deepfakes in the future, it's going to be easier. But it's kind of we have to, the answer is not to not release the models and techniques. The answer is to educate people that, hey, look, not everything you see on a on a picture, especially if it looks like it's up sampled from two from 64 by 64. Not everything you see on there might be entirely real, right? Things can be altered, things can be photoshopped, things can be created like this. It's the same as people have learned that not everything that's written in an email is true. And people will simply have to adapt, that's going to be the only way. Not giving people access to these things seems to be kind of futile. But as I said, I don't believe for a second that actual safety considerations were the reason for this. In any case, let me know what you think. And that was it for me. Try out the model and maybe you'll find something cool. Bye bye.
[{"start": 0.0, "end": 7.12, "text": " Hello there, today we'll look at Glide towards photorealistic image generation and editing"}, {"start": 7.12, "end": 13.68, "text": " with text-guided diffusion models by Alex Nicolle, Prafula Dhariawal, Aditya Ramesh"}, {"start": 13.68, "end": 19.44, "text": " and others of OpenAI. This paper on a high level, well, I'll just show you what you can"}, {"start": 19.44, "end": 25.76, "text": " do. I'm sure you've all seen this paper in one way or another. It is another paper that"}, {"start": 25.76, "end": 32.64, "text": " generates images given a piece of text. But this time, it's not a GAN or anything like this or a"}, {"start": 32.64, "end": 40.08, "text": " VQ VAE. This time, it is a diffusion model. This is a different class of models and we'll go into"}, {"start": 40.08, "end": 46.08, "text": " what they are and how they work. But essentially, you can see right here that the model that turns"}, {"start": 46.08, "end": 52.160000000000004, "text": " out of this and of course, this being OpenAI, they train this on a massive scale and this model"}, {"start": 52.16, "end": 59.76, "text": " is really big. But what comes out of it is very, very, very much better than for example, Dali,"}, {"start": 59.76, "end": 68.56, "text": " which always had this kind of blurriness to it. You can see right here a crayon drawing of a space"}, {"start": 68.56, "end": 76.56, "text": " elevator, pixel art, corgi pizza. So this is trained on a big scrape of images from the internet."}, {"start": 76.56, "end": 82.32000000000001, "text": " And as you can see, the outputs are pretty stunning. So it gets for example, the shadows"}, {"start": 82.32000000000001, "end": 89.84, "text": " right here, it gets them correctly, even the red on blue blending. It gets different styles like the"}, {"start": 90.72, "end": 97.68, "text": " Salvador Dali style. It combines different concepts, although maybe you know, this has been"}, {"start": 97.68, "end": 103.2, "text": " seen on the internet somewhere, but it is able to combine different concepts. And given that these"}, {"start": 103.2, "end": 109.52000000000001, "text": " are diffusion models, you can actually do a bunch of more stuff with them. For example, in-painting"}, {"start": 109.52000000000001, "end": 116.48, "text": " is immediately accessible to this model. Now, usually, in-painting is accessible to diffusion"}, {"start": 116.48, "end": 123.28, "text": " models. However, they actually train an in-painting model on top of this. But in essence, a lot of"}, {"start": 123.28, "end": 129.44, "text": " stuff would be accessible. So this is now possible where you say, okay, I only want to change a part"}, {"start": 129.44, "end": 135.76, "text": " of the image like this part right here, you give a text saying, a man wearing a white hat, and the"}, {"start": 135.76, "end": 143.28, "text": " model generates the man wearing a white hat. This is very cool. You can do things like this, where"}, {"start": 143.28, "end": 148.8, "text": " you first, so the pictures here are a bit confusing, but you first generate an image from a"}, {"start": 148.8, "end": 154.4, "text": " text prompt, like a cozy living room, then you get this living room. And then here, the user would"}, {"start": 154.4, "end": 159.52, "text": " annotate this window sort of would draw over it, and will give the next text prompt. The next text"}, {"start": 159.52, "end": 166.8, "text": " prompt would be a painting of a corgi on the wall above the couch. And the model, it's an in, so this"}, {"start": 166.8, "end": 172.8, "text": " is the in-painting mode, the model would only be able to paint the green area. So it would sort of"}, {"start": 172.8, "end": 180.96, "text": " try to conform to the text using only the green area. And therefore, it would make this corgi"}, {"start": 180.96, "end": 185.28, "text": " picture on the wall right here, then the user goes further and says, well, now I'm going to"}, {"start": 185.28, "end": 191.04000000000002, "text": " paint this area right here. And I'm going to issue the prompt around coffee table in front of a couch,"}, {"start": 191.04000000000002, "end": 196.0, "text": " and the model will generate it and so on. You can see that this enables sort of an interactive"}, {"start": 196.0, "end": 202.8, "text": " creation of this scenery at the end, the couch, the couch in the corner of the room. So changing"}, {"start": 202.8, "end": 207.92000000000002, "text": " the entire wall right here, you can see the back of the room has some space. And now it's being"}, {"start": 207.92, "end": 216.39999999999998, "text": " changed to a wall. So this is the kind of stuff that's possible. Editing right here. Even what's"}, {"start": 216.39999999999998, "end": 221.35999999999999, "text": " this, this sort of sketch editing where you don't only mask, but along with the mask, you provide"}, {"start": 221.35999999999999, "end": 226.48, "text": " sort of like a sketch as you can see right here. So this part here is blue, and then the part here"}, {"start": 226.48, "end": 237.2, "text": " is white. And that's also the mask that the picture receives. And you can see that only one cloud in"}, {"start": 237.2, "end": 244.0, "text": " the sky today, it sort of you can guide even more. So you can guide with text, and you can guide with"}, {"start": 244.0, "end": 253.67999999999998, "text": " sketch color, and so on. So this is a very, very, very cool model, you can see the quality is very,"}, {"start": 253.67999999999998, "end": 261.59999999999997, "text": " very good. Here is for example, a comparison. These are real images from the MS Marco data set MS Coco,"}, {"start": 261.6, "end": 267.52000000000004, "text": " sorry. This is a data set of pictures with associated labels, so text descriptions of the"}, {"start": 267.52000000000004, "end": 274.32000000000005, "text": " picture. So you have some ground truth. So the ground truth here will be this one. And the label"}, {"start": 274.32000000000005, "end": 283.20000000000005, "text": " is a green train coming down the tracks. You can see Dali generates something neat, but it's sort"}, {"start": 283.20000000000005, "end": 288.56, "text": " of blurry. It's kind of cartoonish, as all the Dali pictures are, if you look in this row,"}, {"start": 288.56, "end": 293.84, "text": " the last one's pretty good, but all the other ones are sort of elephants are more like blobs."}, {"start": 294.72, "end": 300.24, "text": " And we've seen this in the in the Dali paper, it was impressive at the time, but this is way more"}, {"start": 300.24, "end": 306.96, "text": " impressive. And then their best model this clip that sorry, this glide model with classifier free"}, {"start": 306.96, "end": 313.44, "text": " guidance, you can see right here, it generates like a high quality train that fits the image"}, {"start": 313.44, "end": 319.52, "text": " fits the image description. And you can see in the entire in the entire row right here,"}, {"start": 319.52, "end": 324.4, "text": " it's pretty good at doing that. So there are a lot of components to this model. And we're going to"}, {"start": 324.4, "end": 330.72, "text": " explore them a little bit. OpenAI has released in classic OpenAI fashion, they've released like a"}, {"start": 330.72, "end": 336.15999999999997, "text": " small, very filtered version of that model, because they're worried about safety, like anyone's going"}, {"start": 336.15999999999997, "end": 341.44, "text": " to believe them after GPT two, they've just been doing this every single model, right? They're just"}, {"start": 341.44, "end": 350.16, "text": " like, Oh, no safety, people can make deep fakes. Oh, no, like, no one's made a deep fake. Like GPT"}, {"start": 350.16, "end": 357.04, "text": " to all the worries, they were just not true. No one has used GPT to to spread around fake news."}, {"start": 357.6, "end": 364.96, "text": " And no one like no one's going to use this model substantially to make very misleading pictures."}, {"start": 364.96, "end": 371.68, "text": " But we'll get to that as well. Alright, so what is a diffusion model? And that's sort of at the core"}, {"start": 371.68, "end": 379.76, "text": " of this thing right here. A diffusion model is a different type of generative model than maybe you're"}, {"start": 379.76, "end": 388.24, "text": " used to from like a GAN or a VQ VAE. So in a GAN, a GAN is probably the closest right here. So again,"}, {"start": 388.24, "end": 393.52, "text": " it's sort of like a neural network with a bunch of layers. And what you do is you sample from"}, {"start": 393.52, "end": 397.12, "text": " some sort of a distribution, you sample some noise, right, you sample some noise, you get some"}, {"start": 397.12, "end": 403.76, "text": " noise vector. So here's a vector, which is complete noise, every entry is noise. You put it through"}, {"start": 403.76, "end": 409.59999999999997, "text": " the network, the network generates pretty picture. And you train the model using a discriminator. In"}, {"start": 409.59999999999997, "end": 415.76, "text": " this case, you train the model to produce pretty pictures given the noise and the noise act sort of"}, {"start": 415.76, "end": 424.96, "text": " as a source of randomness. So the mapping is clear, you train to map from noise to picture. Now,"}, {"start": 424.96, "end": 432.88, "text": " a diffusion model goes in almost like a different direction. So what you do is during training, you"}, {"start": 432.88, "end": 440.8, "text": " have a data set, and you take an image. So from from a data set, you have a data set, you take an"}, {"start": 440.8, "end": 451.52000000000004, "text": " image out of it. Let's say this is your trusty, trusty cat, and you're going to, you're going to"}, {"start": 451.52000000000004, "end": 458.88, "text": " put noise onto this image. So you're going to add noise and noise. Let's represent that with"}, {"start": 458.88, "end": 466.0, "text": " Sigma. No, I think they do, they do epsilon or eta in this in this paper right here. So you add"}, {"start": 466.0, "end": 474.32, "text": " that, and then you get a slightly noisy version of this. Let's just let's just wiggle a bit, wiggle,"}, {"start": 474.4, "end": 481.76, "text": " wiggle, wiggle, and you do it again. So through adding noise, and you add lots and lots and lots"}, {"start": 481.76, "end": 488.4, "text": " of noise, okay. So every time you add a tiny, tiny bit of noise, and that means that more and more"}, {"start": 488.4, "end": 494.0, "text": " your picture is just going to be blurry and blurry and blurry. Now, if you do this for long enough,"}, {"start": 494.0, "end": 501.2, "text": " in the limit, you can prove that obviously, if you do this infinitely many times, what comes out at"}, {"start": 501.2, "end": 508.24, "text": " the end is going to be just normally distributed. If your noise is normally distributed, and you"}, {"start": 508.24, "end": 516.16, "text": " scale every time correctly, then whatever turns out is going to be normally distributed with some"}, {"start": 516.16, "end": 523.6, "text": " parameters here. So this right here is going to be a known distribution. If you if you add noise"}, {"start": 523.6, "end": 529.6800000000001, "text": " for long enough, if you destroy all of the information that the picture has, then you'll end"}, {"start": 529.6800000000001, "end": 539.2, "text": " up with sort of an entry in a known distribution. However, every step that you do right here is very"}, {"start": 539.2, "end": 545.2, "text": " small, every step, you just add a little bit of noise. So technically, it's possible for a model"}, {"start": 545.2, "end": 550.8000000000001, "text": " to look at this picture right here, which is kind of a bit of a blurry version of the cat, and"}, {"start": 550.8, "end": 559.4399999999999, "text": " predict and learn to predict the more sharp version of the cat. Okay, this is a foundation of"}, {"start": 559.4399999999999, "end": 565.52, "text": " many, many sort of denoising models, many up sampling models, super resolution models, what"}, {"start": 565.52, "end": 571.1999999999999, "text": " have you, okay, they do this in one step. But essentially, here we say, the individual step"}, {"start": 571.52, "end": 579.8399999999999, "text": " is small enough such that the model can technically learn to reconstruct it. However, if we do it for"}, {"start": 579.84, "end": 587.52, "text": " long enough in you know, going to infinity, the we are at a known distribution, namely the the"}, {"start": 587.52, "end": 594.0, "text": " standard normal distribution. And these two things together mean that, well, if we have trained the"}, {"start": 594.0, "end": 599.6, "text": " model to reconstruct the individual steps, what we can technically do is we can now go ahead sample"}, {"start": 599.6, "end": 603.76, "text": " from this known distribution, right? Because ultimately, we want to sample from the data"}, {"start": 603.76, "end": 609.12, "text": " distribution, but that's hard because we don't know it. But here, we can just sample some noise,"}, {"start": 609.12, "end": 615.52, "text": " from a known distribution, then put it through this process of reconstruction, all the way all the"}, {"start": 615.52, "end": 621.52, "text": " steps that we did up here during training. During training, we just noise and noise and noise the"}, {"start": 621.52, "end": 627.28, "text": " images again and again and again, we trained the neural network to for every step to reconstruct"}, {"start": 627.28, "end": 632.16, "text": " the previous step. So we can now just put it through this series of trained neural networks."}, {"start": 632.16, "end": 637.6800000000001, "text": " In fact, it's just going to be one neural network that gets the index of the step as a parameter"}, {"start": 637.68, "end": 646.0, "text": " and outcomes an image, right outcomes a true data image. If these two things up here hold,"}, {"start": 646.0, "end": 653.04, "text": " then this should be possible. This is the basis for these diffusion models. So specifically,"}, {"start": 655.1999999999999, "end": 660.0799999999999, "text": " given a sample, that's what they say here, given a sample from the data distribution,"}, {"start": 660.64, "end": 666.7199999999999, "text": " this is x zero. So this is the data distribution, we produce a Markov chain of latent variables,"}, {"start": 666.72, "end": 675.36, "text": " x one to xt, with everyone being a more noisy version, and xt finally being of a like a known"}, {"start": 675.36, "end": 681.12, "text": " distribution, because we do it infinitely, or a large number of times, by progressively adding"}, {"start": 681.12, "end": 688.08, "text": " Gaussian noise to the sample. So you can see right here, we take xt minus one, we scale it down a"}, {"start": 688.08, "end": 693.6, "text": " bit, because if you wouldn't do that, the sort of the image would just increase in scale over,"}, {"start": 693.6, "end": 699.44, "text": " because we just keep adding stuff. But this, it's just a rescaling, there's nothing more happening"}, {"start": 699.44, "end": 712.0, "text": " here. So we add noise, this here is the mean of a distribution, the covariance matrix here is a"}, {"start": 712.0, "end": 722.4, "text": " diagonal, which essentially means we just add a bit of noise of the scale of alpha t. No, sorry,"}, {"start": 722.4, "end": 727.68, "text": " we just add a bit of noise, we rescale by alpha t, which is a scaling factor. And that's how we"}, {"start": 727.68, "end": 735.28, "text": " obtain the next step, the xt. So again, we do this enough. So we take xt for the next step,"}, {"start": 735.28, "end": 743.6, "text": " we plug it in here, and then we obtain xt plus one, and so on. So if the magnitude of the noise added"}, {"start": 743.6, "end": 751.12, "text": " at each step is small enough, the posterior is well, well approximated by a diagonal Gaussian,"}, {"start": 751.12, "end": 755.92, "text": " that's what they say right here. So what does this mean? The posterior, it means that this is"}, {"start": 755.92, "end": 764.32, "text": " the reverse step, right? I have xt, and I'm looking to recreate xt minus one. So if the noise is small"}, {"start": 764.32, "end": 772.16, "text": " enough, then the posterior is well approximated by a diagonal Gaussian, and we have a hope to learn"}, {"start": 772.16, "end": 778.64, "text": " it with a neural network, right? Furthermore, if the magnitude of the total noise added throughout"}, {"start": 778.64, "end": 786.56, "text": " the chain is large enough, then the last step is well approximated by a known by a standard normal"}, {"start": 786.56, "end": 793.28, "text": " distribution. These properties suggest learning a model for this posterior, right, we have xt,"}, {"start": 793.28, "end": 800.08, "text": " we want to reconstruct xt minus one to approximate the true posterior. Okay, so we are going to learn"}, {"start": 800.08, "end": 806.24, "text": " a neural network that it doesn't exactly reconstruct the image. But this is a variational"}, {"start": 806.24, "end": 810.96, "text": " model. So what we're going to do is we're going to plug in xt into a neural network, the neural"}, {"start": 810.96, "end": 816.96, "text": " network is going to predict the mean and the covariance matrix of the next step of the chain of"}, {"start": 816.96, "end": 822.88, "text": " the next step of the denoising chain. And then we can use this to produce samples, we simply,"}, {"start": 824.24, "end": 833.76, "text": " sorry, we start, we start with Gaussian noise, which is the end, and we gradually reduce the"}, {"start": 833.76, "end": 840.4, "text": " noise in a sequence of steps until we are at the data distribution, or at least the predicted data"}, {"start": 840.4, "end": 846.88, "text": " distribution. So this is not a new idea. This has been and I think I have the references open, this"}, {"start": 846.88, "end": 852.0, "text": " has been explored previously, for example, this just an example right here. denoising diffusion"}, {"start": 852.0, "end": 857.28, "text": " probabilistic models is one of the papers that introduced lots of these things you can see right"}, {"start": 857.28, "end": 864.48, "text": " here. These have still been trained on like, just images as such. So this is the left is trained on"}, {"start": 864.48, "end": 870.56, "text": " a face data set. The right is trained on CIFAR 10. This is unconditional generation without the text"}, {"start": 870.56, "end": 877.04, "text": " prompt or anything like this. But you can see the same principle applies, we simply add noise during"}, {"start": 877.04, "end": 883.04, "text": " training, and we learn a neural network to remove the noise to predict what the image would look"}, {"start": 883.04, "end": 893.1999999999999, "text": " like one noise step less. Here already, there was an invention that the paper here would make use"}, {"start": 893.1999999999999, "end": 901.1999999999999, "text": " of namely the loss function right here, we're going to look at that in just a second. So that's"}, {"start": 901.1999999999999, "end": 906.9599999999999, "text": " that's the second. So they say, while there exists a tractable variational lower bound, better results"}, {"start": 906.96, "end": 913.0400000000001, "text": " arise from optimizing a surrogate objective, which reweighs the term in the variational lower bound."}, {"start": 913.0400000000001, "end": 919.6, "text": " So the loss we're going to optimize right here is during training, if you can see right here what"}, {"start": 919.6, "end": 925.9200000000001, "text": " during training, we, we train the neural network to reconstruct one of these steps, right, each"}, {"start": 925.9200000000001, "end": 934.24, "text": " sample in training is going to be some image x t minus one, and some image x t. And we're going to"}, {"start": 934.24, "end": 939.92, "text": " reconstruct, we're going to train the neural network to predict x t minus one from x t or the"}, {"start": 939.92, "end": 947.6800000000001, "text": " variational sort of the distribution of that. So this is a training sample. Now, how do we get the"}, {"start": 947.6800000000001, "end": 953.52, "text": " training sample, what we can do is we can take x zero right here, and we could go through and add"}, {"start": 953.52, "end": 960.72, "text": " and add and add noise. But since we always add Gaussian noise, we can simply do this in one step."}, {"start": 960.72, "end": 966.96, "text": " There's nothing depending intermediately right here. So we do it in one step right here. And"}, {"start": 966.96, "end": 972.96, "text": " then we add another bit of noise. That's how we get the two samples. And then rather than predicting"}, {"start": 972.96, "end": 979.6, "text": " the image itself, what these models do is they will predict the noise. So what we actually predict"}, {"start": 979.6, "end": 987.84, "text": " is going to be the noise, the noise epsilon here, which we can calculate by x t minus x t minus one."}, {"start": 987.84, "end": 994.5600000000001, "text": " So this is our prediction target, this is our loss function, the network is supposed to output this"}, {"start": 994.5600000000001, "end": 1002.48, "text": " right here. And of course, we know the true one, you can see the network will try to output this"}, {"start": 1002.48, "end": 1008.5600000000001, "text": " given x t and an index into which step it is. So we're going to tell the network by the way,"}, {"start": 1009.12, "end": 1016.48, "text": " here's the noise. Here's the number of steps we're into this process. And we're going to"}, {"start": 1016.48, "end": 1022.64, "text": " train the network to read to say, what was the noise that was added, it's a bit easier, just,"}, {"start": 1022.64, "end": 1028.72, "text": " I think it's just like a scaling, scaling property, because this is going to have sort of zero mean"}, {"start": 1028.72, "end": 1038.96, "text": " and unit variance. So it's easier to predict for a neural network. So that is one of that is very"}, {"start": 1038.96, "end": 1049.52, "text": " standard in diffusion models. The next thing they introduce is guided diffusion. By the way,"}, {"start": 1050.24, "end": 1056.08, "text": " they also mentioned somewhere that they, they learn the covariance matrix. Yes, there's another"}, {"start": 1056.08, "end": 1062.96, "text": " paper that also learns the covariance matrix, this first paper just fixed it at a diagonal. But then"}, {"start": 1062.96, "end": 1069.44, "text": " there is another paper that improved upon that, called improved denoting diffusion probabilistic"}, {"start": 1069.44, "end": 1077.3600000000001, "text": " model, interestingly, by the same authors here. And they, they show a method to learn this"}, {"start": 1077.3600000000001, "end": 1082.8, "text": " covariance matrix, which is mostly a scaling issue, because there is a narrow band that is a"}, {"start": 1082.8, "end": 1089.28, "text": " valid covariance matrix. And they show with the correct parameterization, they can in fact learn"}, {"start": 1089.28, "end": 1095.84, "text": " it and get better, better performance. But this just for reference, it's not super important right"}, {"start": 1095.84, "end": 1107.44, "text": " here. The second part is more important. So this is guided diffusion. So what we can do here is we"}, {"start": 1107.44, "end": 1113.28, "text": " can build a model, let's just assume we have images and we have class labels for the images,"}, {"start": 1113.28, "end": 1122.16, "text": " let's leave away the text right now. Okay, so we have a class label for here. So this has a class"}, {"start": 1122.16, "end": 1128.16, "text": " label of cat, for example, there's also dog and so on. So what we can do is we can train the neural"}, {"start": 1128.16, "end": 1133.92, "text": " network here, you know, each step, we train it to reconstruct one step. So that's going to predict"}, {"start": 1133.92, "end": 1140.6399999999999, "text": " the noise that was added, given the image xt, given the index t. What we can also do is we can say,"}, {"start": 1140.64, "end": 1149.5200000000002, "text": " by the way, it's also, we give it the label y. So y, in this case is cat. So we can train a class"}, {"start": 1149.5200000000002, "end": 1156.8000000000002, "text": " conditional model. And that, you know, has some, some advantages, we know class conditional GANs"}, {"start": 1156.8000000000002, "end": 1163.68, "text": " work quite well. So if you give it the class label as an input, you can often improve that. And you"}, {"start": 1163.68, "end": 1171.76, "text": " would do that by either embedding the class label as a one hot vector into the network or something"}, {"start": 1171.76, "end": 1178.72, "text": " like this. Now with a text model, it's a bit more tricky, right? But what you can do is you, let's"}, {"start": 1178.72, "end": 1188.0800000000002, "text": " say this here, this here is some sort of a neural network, right? So xt goes in, this is xt goes into"}, {"start": 1188.08, "end": 1195.4399999999998, "text": " an encoder with a bunch of layers, maybe the t itself also goes in here as some sort of a float"}, {"start": 1195.4399999999998, "end": 1201.6, "text": " or an embedding a one hot vector or something like this. And the class label could also go in here,"}, {"start": 1201.6, "end": 1209.1999999999998, "text": " right? However, if you have text, what you can do is let's say you don't have this, but now you have"}, {"start": 1209.1999999999998, "end": 1215.6, "text": " a text description, they call this C. So you can first put the text description to through and it's"}, {"start": 1215.6, "end": 1222.8, "text": " own network, and then combine the embeddings. So either put the embeddings here as sort of a class"}, {"start": 1222.8, "end": 1229.1999999999998, "text": " embedding, or you can put the embeddings into each layer right here in this stack. And I think they"}, {"start": 1229.1999999999998, "end": 1240.0, "text": " do both. In any case, you can embed the text right here of the image, because their data set always"}, {"start": 1240.0, "end": 1247.36, "text": " has images and text together. So that's what I said at the beginning. So you can take this text,"}, {"start": 1247.36, "end": 1253.76, "text": " you can put it through an encoder itself, you can input it into this process right here. This is the"}, {"start": 1253.76, "end": 1262.72, "text": " network that is going to ultimately predict the added noise given an image. And yeah, the network"}, {"start": 1262.72, "end": 1269.92, "text": " can take inspiration and take can learn from the text. So if it sees this picture right here, for"}, {"start": 1269.92, "end": 1276.88, "text": " example, but in a very noisy way, and it has the text information, a couch in the corner of a room,"}, {"start": 1276.88, "end": 1282.08, "text": " it's obviously going to perform better than if it wouldn't have the text. And ultimately, that's"}, {"start": 1282.08, "end": 1287.68, "text": " going to unlock the capability that we can input a text at the very beginning, and then the model"}, {"start": 1287.68, "end": 1295.6000000000001, "text": " guided by this text will produce a living room, sorry, a couch in the corner of a room. So now,"}, {"start": 1296.3200000000002, "end": 1307.2, "text": " is this enough? And the answer is not yet. So class conditional models are working fine. However,"}, {"start": 1307.8400000000001, "end": 1314.24, "text": " it's better if you do what's called guided diffusion. So in guided diffusion, we not only"}, {"start": 1314.24, "end": 1321.6, "text": " want to make our models class conditional, but we want to guide them even more, we want to push"}, {"start": 1321.6, "end": 1327.92, "text": " them into a direction. And this is called guided diffusion. And one way to do it is to say, well,"}, {"start": 1327.92, "end": 1337.52, "text": " I have an additional classifier, I have a classifier, for example, an ImageNet classifier,"}, {"start": 1337.52, "end": 1344.16, "text": " right? And if I want to push my diffusion process towards a particular label, I can take that ImageNet"}, {"start": 1344.16, "end": 1350.56, "text": " classifier, and I can go along the gradient of that. This is very much like things like deep"}, {"start": 1350.56, "end": 1358.6399999999999, "text": " dream work, or this is essentially clip guided diffusion is this but with clip. So I have the"}, {"start": 1358.6399999999999, "end": 1363.84, "text": " clip model. And if you don't know what the clip model is, this is a model where you input an image"}, {"start": 1363.84, "end": 1374.3999999999999, "text": " and a piece of text, and it tells you how good, how good do the so let's put that a sigma is do"}, {"start": 1374.3999999999999, "end": 1381.84, "text": " these two things fit together well or not. Now, if you think about the gradient of this with respect"}, {"start": 1381.84, "end": 1391.6, "text": " to the image, then you can see that you can push the diffusion process into a direction. So this"}, {"start": 1391.6, "end": 1396.1599999999999, "text": " is one way of doing it. But it means that you have to have some sort of an external"}, {"start": 1397.04, "end": 1404.9599999999998, "text": " classifier to go by. There is also a method called classifier free guidance. And this was introduced"}, {"start": 1404.9599999999998, "end": 1410.48, "text": " by Hoagy, who is a famous author, and he's a very famous author. And he's a very famous"}, {"start": 1410.48, "end": 1418.24, "text": " author, and he's a very famous author. And he's a very famous author. And he's a very famous author."}, {"start": 1418.24, "end": 1424.32, "text": " And he gives us a classifier free guidance. And this was introduced by Hoagy and Salomon. And"}, {"start": 1424.32, "end": 1432.88, "text": " this is where you sort of use the models own knowledge about its class conditioning in order"}, {"start": 1432.88, "end": 1442.24, "text": " to do this guidance. And this is a bit weird. And I feel like, I feel like, I feel like this"}, {"start": 1442.24, "end": 1455.74, "text": " And I feel the fact that this works appears to be a little bit of just a hint that our current models aren't making use of the data fully because we have to do these tricks at inference time."}, {"start": 1455.74, "end": 1467.24, "text": " So it's more pointing towards us not really being the masters of these technologies yet, rather than this being some sort of an intrinsically good thing to do."}, {"start": 1467.24, "end": 1473.74, "text": " But essentially what we want to do is during training, we train these class conditional things, right?"}, {"start": 1473.74, "end": 1489.74, "text": " We train, let's produce the noise that was added to xt in the last step conditioned on y and y here could be a class label, y could be the input text, y could be, you know, pretty much any conditioning information."}, {"start": 1489.74, "end": 1504.24, "text": " And then we also alongside that, sometimes we don't provide that label at all. We don't just don't provide the label, which essentially means that we are training an unconditional generator."}, {"start": 1504.24, "end": 1511.24, "text": " So we just simply forget the fact that we have labels. We simply train the image generation model unconditional."}, {"start": 1511.24, "end": 1521.74, "text": " So we just give the model xt, we ask, here is just some image without description, without nothing, what was the noise added to this image?"}, {"start": 1521.74, "end": 1528.74, "text": " And now at inference, so we just train the model in both ways. During training, we sometimes just leave away the label."}, {"start": 1528.74, "end": 1536.74, "text": " This could be beneficial, as this part, in fact, would be the opportunity to bring more data into the picture, right?"}, {"start": 1536.74, "end": 1542.24, "text": " Let's say I have only part of my data is labeled and part of my data is unlabeled."}, {"start": 1542.24, "end": 1550.74, "text": " We could actually in here, bring in the unlabeled data, and therefore get more data into the system than we usually had."}, {"start": 1550.74, "end": 1557.74, "text": " But given that they probably have enough data with their giant image caption data set here."}, {"start": 1557.74, "end": 1574.24, "text": " By the way, it's the same data set they used for Dali. Given that it's probably they just leave away the text at during training for some of the, they say right here, unlabeled with a fixed probability during training."}, {"start": 1574.24, "end": 1586.24, "text": " Now during inference, you can do something with that. What you can do during inference, you can say, well, if I am in the situation where I have an image and a label,"}, {"start": 1586.24, "end": 1597.24, "text": " and I asked my model to generate the noise, what I can do is I can do a little bit like the same thing I did with the clip guiding."}, {"start": 1597.24, "end": 1609.24, "text": " So here I let my model predict the un-noised version, but I also push it into the direction that clip tells me would be a good image."}, {"start": 1609.24, "end": 1616.24, "text": " So it's two things. This is given the image, what would be the un-noisy or the less noisy version."}, {"start": 1616.24, "end": 1625.74, "text": " And this one would be, well, in general, which image would be sort of appropriate for this piece of text. It makes the two objectives."}, {"start": 1625.74, "end": 1632.74, "text": " This is very much the same. So if you unpack this, you can see that this right here,"}, {"start": 1632.74, "end": 1644.74, "text": " unconditionally asks, given this image, which is the less noisy version of the image or give me the noise that is, that was added to the image."}, {"start": 1644.74, "end": 1653.74, "text": " And then you push it into this direction right here. And you can see this is the difference between the noise that the model predicts unconditionally"}, {"start": 1653.74, "end": 1668.74, "text": " and the noise that the model predicts conditioned on the label. So this is a direction, this direction points very much into the direction of the noise that was specifically added to the label."}, {"start": 1668.74, "end": 1672.74, "text": " Right. So it's the difference between the conditional and unconditional prediction."}, {"start": 1672.74, "end": 1687.74, "text": " We add that to the predicted noise right here. So the model predicts, OK, this is the noise that was added and the conditional model predicts this one."}, {"start": 1687.74, "end": 1695.74, "text": " And then we simply push the prediction into this direction. You can see right here, there's a scalar S involved."}, {"start": 1695.74, "end": 1704.74, "text": " S obviously must be larger than one because if S is smaller, like this is what we would predict, usually the conditional one."}, {"start": 1704.74, "end": 1711.74, "text": " So now if S is larger than one, we're going to predict something more up here."}, {"start": 1711.74, "end": 1717.74, "text": " And notice the difference. If we didn't have this, if we didn't have this, we would simply predict this point right here."}, {"start": 1717.74, "end": 1723.74, "text": " We wouldn't know which one, which direction was a better direction because we also have the unconditional point right here."}, {"start": 1723.74, "end": 1731.74, "text": " We can clearly say that this direction is probably the direction that goes into the direction of the conditioning information."}, {"start": 1731.74, "end": 1740.74, "text": " So we can choose to sort of overdo it. Again, I think that is that's kind of a trick around the fact that we don't know."}, {"start": 1740.74, "end": 1748.74, "text": " We don't know how to handle the information very well quite yet. I'm not sure about it."}, {"start": 1748.74, "end": 1757.74, "text": " It seems like you wouldn't even have to do this necessarily. What you could also do if you want to go further,"}, {"start": 1757.74, "end": 1770.74, "text": " you could take sort of inspiration from the contrastive learning communities and maybe do some hard, some, you could also replace this part and this part, by the way."}, {"start": 1770.74, "end": 1780.74, "text": " So these parts you could replace sort of by an expectation of these noises over some labels, y hat or y prime."}, {"start": 1780.74, "end": 1791.74, "text": " So which means you could just sample some other text or some other conditioning information randomly and get an expectation."}, {"start": 1791.74, "end": 1799.74, "text": " You could also do hard negative sampling. So you could take labels that are fairly close or you could take labels that are kind of confusing."}, {"start": 1799.74, "end": 1809.74, "text": " And try to differentiate yourself. There's a lot of possibilities here. I can see that, but still it feels like a bit of a trick."}, {"start": 1809.74, "end": 1818.74, "text": " Yeah. So good. That's what they do. They do clip guidance. So they do this classifier free guidance, which turns out to be the better variant."}, {"start": 1818.74, "end": 1822.74, "text": " And they also do the clip guidance, which is what we discussed before, except with clip."}, {"start": 1822.74, "end": 1837.74, "text": " You can see they've just replaced the gradient of a classifier with the gradient of the clip model. The clip model is simply an inner product between an embedding of the image and embedding of the text."}, {"start": 1837.74, "end": 1847.74, "text": " And they say the reason probably that the classifier free guidance works better is because the clip sort of the diffusion models,"}, {"start": 1847.74, "end": 1858.74, "text": " what they do is they find like adversarial examples to clip and not necessarily good pictures."}, {"start": 1858.74, "end": 1874.74, "text": " Now, I don't know if the classifier free guidance would also be something that could replace sort of the current notebooks that are flying around where clip is used, clip guided diffusion and VQGAN plus clip."}, {"start": 1874.74, "end": 1890.74, "text": " But I'm not sure because the VQGAN, it seems already restricts the space of images such that it's not that easy to find adversarial examples because it always has to go through the vector quantization."}, {"start": 1890.74, "end": 1898.74, "text": " OK, that's the model. Like the model is nothing else. It's a diffusion model. All right. This has existed before."}, {"start": 1898.74, "end": 1908.74, "text": " It is conditioned on conditioning information. The diffusion model itself is conditioned in this case on text that goes through a transformer encoder, which is the blue thing right here."}, {"start": 1908.74, "end": 1914.74, "text": " This embeddings are then sort of concatenated into the process of this diffusion model."}, {"start": 1914.74, "end": 1922.74, "text": " The diffusion model is a model that for one of these steps predicts sort of tries to predict the reverse."}, {"start": 1922.74, "end": 1930.74, "text": " It's the same model for each step. It just gets as an additional conditioning information which step it's currently trying to reconstruct."}, {"start": 1930.74, "end": 1935.74, "text": " It always reconstructs the noise that was added. Training data generation is pretty easy."}, {"start": 1935.74, "end": 1942.74, "text": " You simply add noise to an image and then you add a bit more. And then the difference between that is the target to predict."}, {"start": 1942.74, "end": 1948.74, "text": " Then at inference time, at inference time, they also do this guided diffusion."}, {"start": 1948.74, "end": 1958.74, "text": " That's either going to be achieved by CLIP and the disadvantage of that is that you have to have an additional classifier like CLIP."}, {"start": 1958.74, "end": 1967.74, "text": " Not only that, but in fact, the classifier has also have to been trained on noisy images because otherwise noisy images are going to be out of its distribution."}, {"start": 1967.74, "end": 1972.74, "text": " So they do in fact train noised CLIP versions."}, {"start": 1972.74, "end": 1976.74, "text": " The disadvantage, as I said, is you need these additional model that's trained on noisy data."}, {"start": 1976.74, "end": 1980.74, "text": " The advantage is that you get to bring additional information here."}, {"start": 1980.74, "end": 1987.74, "text": " You get to essentially potentially even bring additional data sets that was used to train these other classifiers."}, {"start": 1987.74, "end": 1991.74, "text": " You can use multiple classifiers, whatever."}, {"start": 1991.74, "end": 1996.74, "text": " They also do classifier free guidance. These two things, they don't use them together."}, {"start": 1996.74, "end": 2000.74, "text": " CLIP guidance and classifier free. They use them either or."}, {"start": 2000.74, "end": 2010.74, "text": " The classifier free guidance is more like a hack where you alongside the conditional denoising train an unconditional denoising."}, {"start": 2010.74, "end": 2024.74, "text": " So you train the model also to sometimes not be conditioned and then you push it into the direction away from the unconditioned towards the conditioned and beyond to make it extra conditioned, I guess."}, {"start": 2024.74, "end": 2027.74, "text": " The disadvantage here is that seems like a hack."}, {"start": 2027.74, "end": 2037.74, "text": " The advantage is that there's potential maybe to do some hard negative sampling and also it doesn't require an extra model on the side."}, {"start": 2037.74, "end": 2045.74, "text": " And also in the unconditional training, you might bring in additional data that has no label."}, {"start": 2045.74, "end": 2048.74, "text": " So training happens."}, {"start": 2048.74, "end": 2055.74, "text": " It's a 3.5 billion parameter text conditional diffusion model at 64 by 64 resolution."}, {"start": 2055.74, "end": 2060.74, "text": " This is way smaller than DALI, by the way. And this is cool."}, {"start": 2060.74, "end": 2066.74, "text": " And a 1.5 billion parameter text conditional up sampling diffusion model to increase the resolution."}, {"start": 2066.74, "end": 2073.74, "text": " So it's a two stage process. The diffusion model itself is at a 64 by 64 resolution."}, {"start": 2073.74, "end": 2080.74, "text": " And then they have an up sampling models. It's also text conditional, but it is."}, {"start": 2080.74, "end": 2089.74, "text": " So this is purely an diffusion up sampling model. It's very much the same principle, except that it now doesn't go."}, {"start": 2089.74, "end": 2099.74, "text": " It doesn't go from noisy image or sorry, from from pure noise to image. It goes from low resolution image to high resolution image."}, {"start": 2099.74, "end": 2110.74, "text": " And alongside of that, they train a noised clip model, which is the classifier that they're going to need to do guidance."}, {"start": 2110.74, "end": 2118.74, "text": " Well, they describe here a little bit of the architectures. We're not super interested. At least I'm not super interested in the architectures."}, {"start": 2118.74, "end": 2123.74, "text": " They're way big models. As I said, they release the small models. They don't release the big models."}, {"start": 2123.74, "end": 2129.74, "text": " And they explicitly train for inpainting, even though you could do it with diffusion models without training."}, {"start": 2129.74, "end": 2134.74, "text": " But they say if you train it, it behaves a bit better."}, {"start": 2134.74, "end": 2143.74, "text": " So during training, they would sort of mask out random parts of the images and then use diffusion to reconstruct those."}, {"start": 2143.74, "end": 2148.74, "text": " And yeah, the results are the results that we've already seen. These are pretty interesting."}, {"start": 2148.74, "end": 2163.74, "text": " They do studies with it. So they do studies on these data sets. So as they increase the guidance scales, they the guidance scales are like the only thing, the only handle they have at inference time."}, {"start": 2163.74, "end": 2171.74, "text": " That to trade off, to trade off diversity and sort of adherence to the data set."}, {"start": 2171.74, "end": 2180.74, "text": " And it turns out that the classifier-free guidance, as you can see right here, is behaving better. This is the frontier right here."}, {"start": 2180.74, "end": 2190.74, "text": " These always trade off two different metrics in the MS COCO data set here. Precision recall, here inception score and FID."}, {"start": 2190.74, "end": 2198.74, "text": " And you can see the only time the clip guidance is better than classifier-free guidance is when you directly look at the clip score."}, {"start": 2198.74, "end": 2204.74, "text": " That's why they say probably the clip guidance simply finds adversarial examples towards clip."}, {"start": 2204.74, "end": 2211.74, "text": " They also let humans rate the pictures in terms of photorealism and caption similarity."}, {"start": 2211.74, "end": 2218.74, "text": " And you can see that the classifier-free guidance wins both times. And that's pretty much it."}, {"start": 2218.74, "end": 2222.74, "text": " They show some failure cases, which I also find pretty interesting."}, {"start": 2222.74, "end": 2232.74, "text": " So an illustration of a cat that has eight legs is not a thing. Bicycle that has continuous tracks instead of wheels."}, {"start": 2232.74, "end": 2243.74, "text": " It seemed like it seemed a bit like Dali as a model was more sort of sensitive or was more respondent to text itself."}, {"start": 2243.74, "end": 2252.74, "text": " So to the prompt, whereas here it seems it's more like generating realistic images that has some sort of the words."}, {"start": 2252.74, "end": 2257.74, "text": " So the words kind of match with the text. A mouse hunting a lion, not happening."}, {"start": 2257.74, "end": 2265.74, "text": " Also a car with triangular wheels, also not happening. As you can see, I myself have tried the small model a little bit."}, {"start": 2265.74, "end": 2275.74, "text": " And you can see, you can try it yourself. I'll put a link up. There is a Gradio space by the user Valhalla."}, {"start": 2275.74, "end": 2282.74, "text": " Thanks a lot for creating that. So here is balloon race. You can see that works pretty well."}, {"start": 2282.74, "end": 2288.74, "text": " A drawing of a tiny house. That's also OK. A hidden treasure on a tropical island."}, {"start": 2288.74, "end": 2296.74, "text": " And I mean, it's a tropical island, right? But yeah, all the elephants had left a long time ago."}, {"start": 2296.74, "end": 2305.74, "text": " Now only a few vultures remain and it's just kind of a bunch of elephants. So, well, the elephants are kind of walking away a little bit. Right."}, {"start": 2305.74, "end": 2314.74, "text": " Yeah. Attention is all you need. Obviously, oddly Russian, Russian vibes from this picture."}, {"start": 2314.74, "end": 2324.74, "text": " And this one is glory to the party. And I guess party is just sort of equated with birthday cake or so."}, {"start": 2324.74, "end": 2337.74, "text": " So the sort of text sensitivity of this model might not be as good, but there might be opportunity to fiddle here."}, {"start": 2337.74, "end": 2349.74, "text": " The samples as such, they look they look pretty, pretty cool. It's also not clear how much of a difference this is between the small model and the large model or how much effort into diffusion is put."}, {"start": 2349.74, "end": 2358.74, "text": " They also say they they they release the model they release is sort of a model on a filtered version of a data set."}, {"start": 2358.74, "end": 2367.74, "text": " And the filtered version removes, for example, removes hate symbols and anything to do with people."}, {"start": 2367.74, "end": 2378.74, "text": " So they say it's not as easy to generate deepfakes. Yeah. And where was."}, {"start": 2378.74, "end": 2383.74, "text": " Yeah, I think the the coolest one is where you can do this interactively. That is that is a pretty cool one."}, {"start": 2383.74, "end": 2396.74, "text": " I want to look at lastly, where sorry for the scrolling around safety consideration. So there's so like they say,"}, {"start": 2396.74, "end": 2409.74, "text": " as a result, releasing our model without safeguards would significantly reduce skills required to create convincing disinformation or deepfakes."}, {"start": 2409.74, "end": 2420.74, "text": " And they say they only release the small model. They say this somewhere."}, {"start": 2420.74, "end": 2422.74, "text": " Where is it?"}, {"start": 2422.74, "end": 2444.74, "text": " Well, in any case, they only release a small model. But I just want everyone to remember GPT-2. And it was exactly the same. And to my knowledge, there's there's not the world is not in chaos right now because people have used GPT-2, which is sort of public by now and can be easily trained by anyone."}, {"start": 2444.74, "end": 2461.74, "text": " The world is not in chaos, because people have access to GPT-2. It's, it's not the case. And I don't know why they do it because for PR reasons, or because they want to kind of sell it, sell the larger model, sell access to it."}, {"start": 2461.74, "end": 2479.74, "text": " I mean, that's all fine. But don't tell me this is safety considerations. And yeah, the fact is, people are going to create deepfakes in the future, it's going to be easier. But it's kind of we have to, the answer is not to not release the models and techniques."}, {"start": 2479.74, "end": 2496.74, "text": " The answer is to educate people that, hey, look, not everything you see on a on a picture, especially if it looks like it's up sampled from two from 64 by 64. Not everything you see on there might be entirely real, right?"}, {"start": 2496.74, "end": 2513.74, "text": " Things can be altered, things can be photoshopped, things can be created like this. It's the same as people have learned that not everything that's written in an email is true. And people will simply have to adapt, that's going to be the only way."}, {"start": 2513.74, "end": 2530.74, "text": " Not giving people access to these things seems to be kind of futile. But as I said, I don't believe for a second that actual safety considerations were the reason for this. In any case, let me know what you think. And that was it for me."}, {"start": 2530.74, "end": 2547.74, "text": " Try out the model and maybe you'll find something cool. Bye bye."}]
Generative Models
https://www.youtube.com/watch?v=qS-iYnp00uc
Parti - Scaling Autoregressive Models for Content-Rich Text-to-Image Generation (Paper Explained)
#parti #ai #aiart Parti is a new autoregressive text-to-image model that shows just how much scale can achieve. This model's outputs are crips, accurate, realistic, and can combine arbitrary styles, concepts, and fulfil even challenging requests. OUTLINE: 0:00 - Introduction 2:40 - Example Outputs 6:00 - Model Architecture 17:15 - Datasets (incl. PartiPrompts) 21:45 - Experimental Results 27:00 - Picking a cherry tree 29:30 - Failure cases 33:20 - Final comments Website: https://parti.research.google/ Paper: https://arxiv.org/abs/2206.10789 Github: https://github.com/google-research/parti Links: Homepage: https://ykilcher.com Merch: https://ykilcher.com/merch YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://ykilcher.com/discord LinkedIn: https://www.linkedin.com/in/ykilcher If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Not a day goes by in AI research in which we don't get a new image generation model these days. So take a look at the top row right here and listen to the prompt that generated them. Oil on canvas painting of a blue night sky with roiling energy. A fuzzy and bright yellow crescent moon shining at the top. Below the exploding yellow stars and radiating swirls of blue, a distant village sits quietly on the right. Connecting earth and sky is a flame like cypress tree with curling and swaying branches on the left. A church spire rises as a beacon over rolling blue hills. That is a 67 word description of Starry Night by Vincent van Gogh. And it is also the prompt that generated the top row of images. And the paper does this to show that image generation models, specifically this one, they have become super duper capable of incorporating not only wild concepts, as you can see here, co-locating the Eiffel Tower with the Sydney skyline and fireworks and whatnot, but also, you know, minute details about things in the image and where things are and how things look. So we've gone from essentially conditional GANs where we could create one of 10 classes to something where we can input like a little essay about what we want to see and get it out. So this is by a group of researchers out of Google Research, and they are a parallel work to the Imagen model that you might have seen. So this model or the paper is called Scaling Autoregressive Models for Content Rich Text to Image Generation. But the model is called, let me grab, if I can, let me grab pen. The model is called P-A-R-T-I. And I have no clue how to pronounce this. This could be party, maybe the pronunciation is on the art or on the part because it's pathways like it's, or part-tie or I have no idea. Let's call it party. And party is a model that generates images from text as we have so many models. However, it doesn't do this in the same style as Imagen, which is a diffusion model. It is an autoregressive model. So here you can see a bunch of other outputs like this. This is insane. Look at the left side right here. A photo of a frog reading the newspaper named Toaday. The newspaper is named Toaday. Like, how crazy is that? That in itself is pretty funny. But we know that these image to, sorry, these text to image models are pretty bad at spelling stuff in images. Well, not this model, as you can see right here. It gets it completely right. It doesn't always get it right, but it gets it right often enough. Or this one, portrait of a statue of the Egyptian god Anubis wearing aviator goggles. Like another connoisseur of fine eyewear, I see. White T-shirt and the leather jacket. The city of Los Angeles is in the background. High res DSLR photograph. That's literally that's the academic version of the Unreal Engine trick right here. And you can see the images spot on. So this requires a lot of knowledge, not only of, you know, what a DSLR photograph is, but also how the skyline of Los Angeles looks, how the Egyptian god Anubis looks, right? And the composition of things together. Like this god was never in a leather jacket depicted. I guess maybe on the internet you'll find anything. But you can see a bunch of more examples right here. I specifically love the thing on the left side here. You can see that they generated images. So the prompt is three quarters front view of a XYZ coming around a curve in a mountain road looking over a green valley on a cloudy day. So X here is any of the colors blue, red and yellow. Y is any of the numbers. 1977, 1997 and 2017. And Z is any of these car types. And now look that the model can essentially track the the historical evolution of these cars. So not only does it know what a Porsche is, it also knows how a Porsche in 77 looked like. Maybe it's not exactly the correct year, but this is pretty crazy. You can see a bunch more examples right here. They do a lot of examples with animals. I specifically like the raccoon here in the style of Cubism. So this is going to be very, very powerful technology. We can immediately see that, you know, the quality of these models gets fast, gets quickly, sorry, gets well, gets better so quickly that in the foreseeable future we're going to have super powerful tools to just create and edit images from text. Look at the left side here, a giant cobra snake made from salad. You know, I'm sure they even say these are cherry picked, but still this is insane. Now, I would love to tell you that behind all of this cool development is a really cool idea, like is a smart architecture and something like this. But I'm afraid it is not. It is simply scale and not simply scale. I mean, you have to have the sort of correct base architecture. There's nothing like particularly there's no cool invention in architecture or a neat trick involved or anything like this. It's really just plug basic things together, make them really big, train them for long on a lot of data and you'll get quality. So this is the model overview right here, the overview of this party or part time model. This is, as I already said, in contrast to Imagen, it is an autoregressive model, so not a diffusion model. What happens is that on this side here, you have this VQGAN image encoder and decoder. Well, they don't call them encoder and decoder. They call them tokenizer and de tokenizer. So if you are not aware, autoregressive models, they work on tokens. Now, tokens in usually in natural language processing are words or part of words. So these would be tokens, token one, token two and so on until token N. And then what you would try to do is you would try always to predict the next token. That's what makes it autoregressive. You feed in parts of a token sequence, like parts of a sentence. You try to predict the next one. That's exactly what you see right here in the architecture. So you pass in the start of sentence token. You try to predict the first token and you pass in the first token. And then from these two, you try to predict the second token and then put that here from these three. You try to predict the third token and so on. That's the autoregressivity. In text, that works well. However, in images, it's not quite obvious how to do that. That's why you first need to get from the image space to the token space. So we need a way for any given image that we get out a sequence of tokens. And it can't be the pixels themselves. We would like to have tokens that are kind of latent and have sort of a bit of meaning, not just individual pixels, because that, first of all, is too many pixels. And second of all, there's not too much, let's say, information in the single pixel. So what we do is we have these image tokenizer and de-tokenizer. This is a VQGAN that is powered by a vision transformer. So essentially, this is a model that takes this image, it ships it through a bunch of layers. And at the end, so let's say the image at the beginning has a bunch of rows, a bunch of columns with its pixels. This goes through a series of maybe downscalings and so on. No, actually, it's because it's a vision transformer, it probably even tokenizes like it patches the image at the very beginning. So these would be image patches. Then these are transformed by a transformer to a latent space. Maybe they are compressed. And then you get tokens. So at the end, you can take these things right here or the things that correspond to them in the latent representation. You can take those as image tokens and you can unroll essentially this image and then feed it into this model. Hey, just a short interjection here from Janek from the future. The idea, I forgot, the idea behind the whole setup here is behind the whole VQGAN is obviously that these things here, are tokens, which means that they come from a set vocabulary. So the way you train a VQGAN isn't just to give you this latent representation of like token like things, but then you also quantize them. So there is also a vocabulary somewhere where you have a set defined set of tokens. I believe in their case, they have like eight eight thousand tokens or so. And your image tokens must be of these eight thousand. So the image has a bunch of tokens, but they all must be one of the things in the vocabulary here. Now, the vocabulary is also learned. There are some techniques by which to learn the vocabulary. But this quantization is actually what then enables you to treat essentially to treat it as a sequence of language tokens, which also come from a vocabulary. All right. Back to Janek in the past. The image tokenizer is trained as an as it says here as a VQGAN, which means that you encode and then you decode again and you try to get out the same image. And at the end, this representation here in the middle is really valuable because it's a tokenized representation of an image. So you put that into the transformer right here. And this is, as we said, an autoregressive model. So it gets as an input, obviously the sequence so far. It tries to predict the next image token, but also gets as an input the text. So this is the prompt that the user puts in. So the prompt is encoded in a transformer encoder and is then fed in as a side input as a target for attention. So whenever in the layer here you have queries, keys and values, I'm going to guess the query can also look at the transformer encoder. The query can also look at the keys right here. So over here, you'd only have keys and values. If you don't know what all of this means, I have a video on attention is all you need, where you can learn how attention mechanisms work. So essentially, the way this is trained is the following. You attach a sentence here or a description of an image and you attach an image right here. The image is then patched. It is fed through the VQGAN encoder. Its latent representation is obtained. That latent representation is put here. And then you essentially train a decoder language model that has cross attention into the text representation of the prompt. So you simply train this thing right here like you would train a GPT model or any other model. And this thing right here is trained, as I said, as an image reconstruction model. And this thing right here is trained, I guess, jointly with this. Actually, don't know. This could this could not be true, but I think it is true. I think it is trained jointly. So that's the model, as I said, is very basic. I wish I could tell you something more interesting right here, but I can't. It's a standard, you know, bunch of transformers in sequence. Essentially, every single component right here is a transformer. And because every single thing is a transformer, you can scale this thing by a lot. By the way, here you can see a bunch of the I'm not going to go into the architectural details quite quite as much. But they do also train an upsampler. So they have images of resolution 256 by 256. Ultimately, they do train an upsampler as well, where so here this is the upsampler super resolution upsampler, where they can go from their pipeline, which does 256 by 256 to a 1024 by 1024 picture, essentially. But this is just upsampling. Right. So there is, I mean, technically no extra information right here. This doesn't get to look at the prompt or anything like this. It simply gets to look at this image and then make a four times larger image out of that. So where did we leave off? Oh, yeah. I also wanted to say if you now want to get an image out of this thing. So not training, but inference. What you do is you attach only the prompt right here. Right. You encode the prompt. You put the start of sentence token right here. You let the model generate one. Then you put that here, two. Then you put that here, three and so on. You let the model generate the image tokens here. You take those image tokens. You feed, you arrange it into the latent representation of the VQGAN and you use the decoder right here in order to generate the final image. So that's the whole flow. And then you put it through the super resolution if you want that. Here you can see the basics, the basic architectural layouts. So there is the smallest model has 350 million parameter. You can see it has 12 encoder and 12 decoder layer. It's pretty standard transformer scaling laws right here. I mean, scaling laws, pretty standard transformer architectural laws. They go through a 750 million parameter model, three billion. And the last one here has 20 billion parameters. So that's a decently sized model. It's not as large as the large language models. And they do use things like sparse conv attention and things like this. But it is, you know, it's pretty large, I would say. You could not run that at home very easily. So where does that get us? They have a big description right here, how they solve this architecturally, how they chart the model, how they use parallelism, which is very interesting. I'm just not an expert at it. So if you're interested, I'll leave you to read this part. I found the at least the drawings here pretty cool. So apparently this the signal is routed like, you know, like so, like so and so. So like in like a snake type of arrangement so that always you can pipeline so that always one thing is essentially busy as you send data to the next thing and so on. But as I said, I'm not the expert in this and I'd rather want to get to the other things, which are the data sets that they use. So they have three data sets, three main data sets right here. One is MS Coco. Now MS Coco, as they show right here for the image on the right hand side, it simply says a bowl of broccoli and apples with a utensil. So it just kind of is a high level description of what's in the image, like an image, simple image caption right for this image right here. Whereas the localized narratives data set, you can see that its description is way longer. It's more linguistically prosaic, but it is also much more descriptive of the actual image. Like so the top is if you want to tell someone what's in an image and the bottom is more like if you want to like really paint the picture, like pun intended, or if you want to describe the picture to someone so that they could maybe recreate it in some way. And it turns out that we are now at the point with these image generation models where they are so good that we need data sets like the bottom one to really push them to their limits. And not only that, but the authors here find that there are even problems with that because these image data sets, they're always created in a way that an image is given and then the humans are asked to write a description, which is really good because then you have image and description together. However, the authors here know that this prevents, for example, fantasy pictures like we saw before the raccoon in cubism that it doesn't exist. So it can't be in any data set or a noob in a leather jacket doesn't exist. So it can't be in any data set. So while we rely on generalization during training for the model to learn these things, we actually need data sets like that to evaluate whether they can really do these things, right? Otherwise we're left with sort of subjective evaluation. So they come up with their own data set, which is called party prompts. And that's actually also the thing they release as far as I understand it. Obviously, as all of the recent works in big models, this thing isn't released. There's no code. There's no, I mean, the code would be trivial. There's no weights. There's no training recipe. There's no, some of the data sets are proprietary if I understand correctly. So the paper is more open about what they do, but still that there is no way of accessing this. So party prompts. This is a data set that essentially only consists of prompts. So there is no images in this data set. And I believe the only way you can really assess thing is you can let the model generate stuff and then you can let humans rate it. That's essentially it. The party prompts, it is pretty interesting because they create these prompts by letting the prompt engineers sort of, they choose, for example, a challenge. So the challenge might be perspective, right, which could be, you know, I need a prompt that asks for some object in some specific perspective that is unusual or quantity. Like I need a prompt that asks for a given number of things because we know that these models, they're not super good at counting. Right. I mean, we also thought the models aren't super good at spelling. And now it turns out, well, if we just make them bigger, they are. So, you know, I'm fairly confident they're going to be good at counting in a short while. That's the challenge. There's also, if I recall correctly, this is this upper table right here, like categories. So there are categories, animals, there are categories, illustrations and so on. So you can see this is a diverse set of category challenge combinations and they make a bunch of prompts for each one. I think they have about 1600 prompts in total in this party prompt eval set, which is a pretty neat thing to have, even if it comes without images. So now they train the thing with their whole architectural shabangs with the parallelism and the pipelining and the yada, yada, yada on TPU v4, I think. So this is a huge operation. So what does that give us? I want to just jump the evals here on the metrics because yes, yes, yes, they're very good, very good. They're also very good as rated by humans, humans very good, which is what's interesting is they have, for example, a retrieval baseline, which simply retrieves images from the training data set. And even if the obviously image text match, the party model wins because you can actually create an image and not retrieve one. But even in image realism, you can see the retrieval is only slightly higher in realism, right? Every single image is real that the retrieval retrieves. And still the humans rate the realism of party almost the same, which is quite speaking for the model. The loss curves are also pretty interesting, especially interesting that the 20 billion model here, it takes quite a time to come down here, right? It kind of has to get surpassed by the three billion model initially and then overtakes it, which maybe means that we haven't exactly found the right training recipes yet for these largest of models. So this now is the cool part where they put the model, the models next to one another. So this is the same prompt with all of these different models. And you can just see where scale gets you. This is a portrait photo of a kangaroo wearing an orange hoodie and blue sunglasses standing on the grass in front of the Sydney Opera House, holding a sign on the chest that says Welcome Friends. And you can see my this, these these things right here, this and this, there may be like Dolly Mini kind of style pictures. And there are also that scale, right? And then we go to the three B model. And this is something that would be familiar maybe from something like Dolly or Dolly, maybe between Dolly and Dolly too, right? These things you can see they're bad at spelling, but as soon as you go bigger, all of a sudden, Welcome Friends, bada boom, there it is. Not bad at spelling anymore. All you need to scale. That's crazy. The sign, very deep learning. Look, as the model learns to spell, initially it can only do Russian or whatever and and just eventually it would actually be funny if that was like actual Russian and it said very deep learning. Can you imagine how crazy that would be? Well, in any case, and also the Grand Canyon, right? So there's kind of structure here and so on, but this very, very deep learning. Perfect. A blue Porsche parked in front of a yellow brick wall. You can see it doesn't always work. But it works better and better and better with scale. Crazy. And here this is like maybe like is this a direct shot at Gary Marcus because the challenge is like an astronaut riding a horse. So astronaut riding a horse in the forest, even the three billion model. Oh, no, it's going to be a horse riding an astronaut, which is going to come up later. And I promise it's going to be funny. But yeah, an astronaut riding a horse in the water in front of them, water lilies and so on. A map of the United States made out of sushi. So as you can see, these these results are fairly insane. Infinity, the back of a violin, four cats surrounding a dog. So now they're really testing these individual categories. Infinity is an abstract concept. Back of violin is perspective. Four cats surrounding a dog is this quantity metric. You can you can see there are four cats. Right. So, yeah, I'm pretty confident that with with scale, these types of problems are going to be solved. Scroll gives an apple to a bird. Yeah, so. What's interesting is they have this narrative of what they call growing a cherry tree. So obviously these samples here are cherry picked, which means that they take out whatever they think are good samples to present in the paper. However, they detail fairly extensively how they arrive at this thing. So what they do is they don't just come up with these long prompts by themselves. Well, these aren't long. OK, but, you know, these long prompts with Anubis in front in a leather jacket in front of Los Angeles skyline, they don't just come up with them on the spot. They have a process of coming up with them. And the process is detailed here. So, for example, they have this idea of combining like a sloth with a van. Right. So they start by just exploring the model and entering things like a smiling sloth, like what comes out. Right. And a van parked on grass. There are always good images and bad images that turn out and they sort of learn how to have to tweak the prompt to get what they want. Once they're happy, they go on. So they modify the prompt a bit. So there is the smiling sloth wearing a leather jacket, a cowboy hat and a kilt or wearing a bow tie and holding a quarter staff. So they kind of explore. They go more and more, as you can see, as you go down this tree, this cherry tree, as they call it, they go down and down. They detail. Well, sometimes there's problems. This one, I believe, has two two arms on this side and so on. So but still they refine and refine and refine. They finally try to combine them. Right. Yeah, here is here is a combination. They refine again. They try to combine the two prompts again. And at the end, they get to something that they might be happy with, for example, the thing here on the left, like this one right here. But I found this pretty interesting, like this process of arriving at these things. So you can't just enter any old long sentence and expect the model to do well. But what turns what might what will work often better, at least as they describe it, is to go through this process right here, which also means that full artistic freedom is a bit away. So it is almost like, yes, you are guiding the model with your inputs, but also the model is kind of guiding you by what it does well and what it doesn't do well if you go via this process. And if you don't go via this process, then I guess you can expect that you you can expect that it might not work as well. So they also have some failure cases, which is pretty cool. For example, the failure cases like color bleeding, where you describe the color of one of the things in the image and sort of the other take on that that color. There's also counting failures and so on, localization failures. For example, here the prompt is, the prompt is. Oh, yeah, the Great Pyramid of Giza situated in front of Mount Everest. That's the bottom two pictures should be that. You can see this, OK, I mean, this isn't this isn't too bad, but this here is just like the pyramid with sort of a Mount Everest cover, right? You can see these models, they sometimes if they can't fulfill the problem directly, they'll kind of mix, they'll just try to get it done somehow and get it really close in text embedding space. That's exactly what you can see right here. There's a bunch of examples. And this one, I told you, it's the horse riding on an astronaut. So they have to actually specify the horse is sitting on an astronaut because the riding is just is just riding indicates too much that the horse is on the bottom. But I just found the horse riding on the astronaut to be absolutely hilarious, especially this one. Yeah, but all in all, I guess what I wanted to say is that this is complaining on a on a very, very high level, right? The paper itself is like moving the goal posts already by sort of criticizing itself for, oh, well, I specified like nine apples in a perfect arrangement. I don't have or right at ten red apples and it's only eight red apples. Like what? What a loser model. Look at that. I mean, this is it is crazy good how these models are and the failure cases here are, you know, yes, they're failure cases. But I don't think that if you told me three, four years ago that this is the type of error that we're at solving that I would have said, yeah, I believe that. I would have way guessed we're still at the point where, you know, we we have mode collapses, we can't create most of the text stuff, we have artifacts and all kinds of things. And I think this is yeah, it's it's kind of mind blowing how fast the progress here is obviously half a year ago or so. Yeah, I would have expected something like this, but I believe, yeah, a lot of people must be very surprised and including me. Yeah, like spelling mistakes, like complaining that, you know, sometimes text is still not spelled right. No, even though, right, Dali couldn't do it at all. And now this thing is doing it almost perfectly, as you can see right here, combining abstract concepts. Look at the thing on top. It's it's insane. Or here like, oh, this leg is in behind the race car. Come on. This is better than I guess anyone had expected. So, yeah, I don't want to waste your time too much more. I just thought this was absolutely cool. And I'm very excited to see where this is going next. Of course, huge bummer that we don't get access to this. I hope this finds its way into some products that we can use. As you know, I'm all for these companies making making money with their inventions. I mean, I think it's cool that they are inventing and, you know, if they want to make some cash off of it, you know, good for them. But I do hope that we actually get to use it. And I it's going to be a fun future where for every presentation or anything, if you need like an illustration, you just you just type it right. You don't go to the Internet to search an appropriate stock photo. You just type it. It's so cool. Or you want to change something in a picture. You just erase it. You just say, well, ever here, change that part to something else. So cool. No Photoshop skills anymore. No drawing skills anymore. Just you and your mind and your creativity. All right. That was it. As I said, the paper presented in this new system is fairly simple. All it does is scale a bunch of transformers in sequence. Essentially, I presented a evaluation benchmark, these party prompts, and it presented. Yeah, their their model, which is ridiculously insane. That was it for me. Let me know what you think. And I'll see you around. Bye bye.
[{"start": 0.0, "end": 7.0, "text": " Not a day goes by in AI research in which we don't get a new image generation model these days."}, {"start": 7.0, "end": 13.0, "text": " So take a look at the top row right here and listen to the prompt that generated them."}, {"start": 13.0, "end": 18.0, "text": " Oil on canvas painting of a blue night sky with roiling energy."}, {"start": 18.0, "end": 22.0, "text": " A fuzzy and bright yellow crescent moon shining at the top."}, {"start": 22.0, "end": 29.0, "text": " Below the exploding yellow stars and radiating swirls of blue, a distant village sits quietly on the right."}, {"start": 29.0, "end": 37.0, "text": " Connecting earth and sky is a flame like cypress tree with curling and swaying branches on the left."}, {"start": 37.0, "end": 42.0, "text": " A church spire rises as a beacon over rolling blue hills."}, {"start": 42.0, "end": 48.0, "text": " That is a 67 word description of Starry Night by Vincent van Gogh."}, {"start": 48.0, "end": 52.0, "text": " And it is also the prompt that generated the top row of images."}, {"start": 52.0, "end": 59.0, "text": " And the paper does this to show that image generation models, specifically this one,"}, {"start": 59.0, "end": 67.0, "text": " they have become super duper capable of incorporating not only wild concepts, as you can see here,"}, {"start": 67.0, "end": 73.0, "text": " co-locating the Eiffel Tower with the Sydney skyline and fireworks and whatnot,"}, {"start": 73.0, "end": 80.0, "text": " but also, you know, minute details about things in the image and where things are and how things look."}, {"start": 80.0, "end": 88.0, "text": " So we've gone from essentially conditional GANs where we could create one of 10 classes"}, {"start": 88.0, "end": 94.0, "text": " to something where we can input like a little essay about what we want to see and get it out."}, {"start": 94.0, "end": 100.0, "text": " So this is by a group of researchers out of Google Research,"}, {"start": 100.0, "end": 107.0, "text": " and they are a parallel work to the Imagen model that you might have seen."}, {"start": 107.0, "end": 114.0, "text": " So this model or the paper is called Scaling Autoregressive Models for Content Rich Text to Image Generation."}, {"start": 114.0, "end": 121.0, "text": " But the model is called, let me grab, if I can, let me grab pen."}, {"start": 121.0, "end": 129.0, "text": " The model is called P-A-R-T-I. And I have no clue how to pronounce this."}, {"start": 129.0, "end": 141.0, "text": " This could be party, maybe the pronunciation is on the art or on the part because it's pathways like it's,"}, {"start": 141.0, "end": 147.0, "text": " or part-tie or I have no idea. Let's call it party."}, {"start": 147.0, "end": 153.0, "text": " And party is a model that generates images from text as we have so many models."}, {"start": 153.0, "end": 160.0, "text": " However, it doesn't do this in the same style as Imagen, which is a diffusion model."}, {"start": 160.0, "end": 166.0, "text": " It is an autoregressive model. So here you can see a bunch of other outputs like this."}, {"start": 166.0, "end": 175.0, "text": " This is insane. Look at the left side right here. A photo of a frog reading the newspaper named Toaday."}, {"start": 175.0, "end": 180.0, "text": " The newspaper is named Toaday. Like, how crazy is that?"}, {"start": 180.0, "end": 186.0, "text": " That in itself is pretty funny. But we know that these image to, sorry,"}, {"start": 186.0, "end": 190.0, "text": " these text to image models are pretty bad at spelling stuff in images."}, {"start": 190.0, "end": 195.0, "text": " Well, not this model, as you can see right here. It gets it completely right."}, {"start": 195.0, "end": 199.0, "text": " It doesn't always get it right, but it gets it right often enough."}, {"start": 199.0, "end": 206.0, "text": " Or this one, portrait of a statue of the Egyptian god Anubis wearing aviator goggles."}, {"start": 206.0, "end": 214.0, "text": " Like another connoisseur of fine eyewear, I see. White T-shirt and the leather jacket."}, {"start": 214.0, "end": 220.0, "text": " The city of Los Angeles is in the background. High res DSLR photograph."}, {"start": 220.0, "end": 224.0, "text": " That's literally that's the academic version of the Unreal Engine trick right here."}, {"start": 224.0, "end": 232.0, "text": " And you can see the images spot on. So this requires a lot of knowledge, not only of, you know,"}, {"start": 232.0, "end": 237.0, "text": " what a DSLR photograph is, but also how the skyline of Los Angeles looks,"}, {"start": 237.0, "end": 243.0, "text": " how the Egyptian god Anubis looks, right? And the composition of things together."}, {"start": 243.0, "end": 251.0, "text": " Like this god was never in a leather jacket depicted. I guess maybe on the internet you'll find anything."}, {"start": 251.0, "end": 258.0, "text": " But you can see a bunch of more examples right here. I specifically love the thing on the left side here."}, {"start": 258.0, "end": 268.0, "text": " You can see that they generated images. So the prompt is three quarters front view of a XYZ"}, {"start": 268.0, "end": 273.0, "text": " coming around a curve in a mountain road looking over a green valley on a cloudy day."}, {"start": 273.0, "end": 280.0, "text": " So X here is any of the colors blue, red and yellow. Y is any of the numbers."}, {"start": 280.0, "end": 293.0, "text": " 1977, 1997 and 2017. And Z is any of these car types. And now look that the model can essentially track the"}, {"start": 293.0, "end": 299.0, "text": " the historical evolution of these cars. So not only does it know what a Porsche is,"}, {"start": 299.0, "end": 309.0, "text": " it also knows how a Porsche in 77 looked like. Maybe it's not exactly the correct year, but this is pretty crazy."}, {"start": 309.0, "end": 313.0, "text": " You can see a bunch more examples right here. They do a lot of examples with animals."}, {"start": 313.0, "end": 319.0, "text": " I specifically like the raccoon here in the style of Cubism."}, {"start": 319.0, "end": 327.0, "text": " So this is going to be very, very powerful technology. We can immediately see that, you know,"}, {"start": 327.0, "end": 338.0, "text": " the quality of these models gets fast, gets quickly, sorry, gets well, gets better so quickly that in the foreseeable future"}, {"start": 338.0, "end": 343.0, "text": " we're going to have super powerful tools to just create and edit images from text."}, {"start": 343.0, "end": 348.0, "text": " Look at the left side here, a giant cobra snake made from salad."}, {"start": 348.0, "end": 356.0, "text": " You know, I'm sure they even say these are cherry picked, but still this is insane."}, {"start": 356.0, "end": 367.0, "text": " Now, I would love to tell you that behind all of this cool development is a really cool idea, like is a smart architecture and something like this."}, {"start": 367.0, "end": 373.0, "text": " But I'm afraid it is not. It is simply scale and not simply scale."}, {"start": 373.0, "end": 377.0, "text": " I mean, you have to have the sort of correct base architecture."}, {"start": 377.0, "end": 386.0, "text": " There's nothing like particularly there's no cool invention in architecture or a neat trick involved or anything like this."}, {"start": 386.0, "end": 395.0, "text": " It's really just plug basic things together, make them really big, train them for long on a lot of data and you'll get quality."}, {"start": 395.0, "end": 402.0, "text": " So this is the model overview right here, the overview of this party or part time model."}, {"start": 402.0, "end": 410.0, "text": " This is, as I already said, in contrast to Imagen, it is an autoregressive model, so not a diffusion model."}, {"start": 410.0, "end": 417.0, "text": " What happens is that on this side here, you have this VQGAN image encoder and decoder."}, {"start": 417.0, "end": 423.0, "text": " Well, they don't call them encoder and decoder. They call them tokenizer and de tokenizer."}, {"start": 423.0, "end": 431.0, "text": " So if you are not aware, autoregressive models, they work on tokens."}, {"start": 431.0, "end": 438.0, "text": " Now, tokens in usually in natural language processing are words or part of words."}, {"start": 438.0, "end": 443.0, "text": " So these would be tokens, token one, token two and so on until token N."}, {"start": 443.0, "end": 448.0, "text": " And then what you would try to do is you would try always to predict the next token."}, {"start": 448.0, "end": 450.0, "text": " That's what makes it autoregressive."}, {"start": 450.0, "end": 455.0, "text": " You feed in parts of a token sequence, like parts of a sentence. You try to predict the next one."}, {"start": 455.0, "end": 459.0, "text": " That's exactly what you see right here in the architecture."}, {"start": 459.0, "end": 466.0, "text": " So you pass in the start of sentence token. You try to predict the first token and you pass in the first token."}, {"start": 466.0, "end": 472.0, "text": " And then from these two, you try to predict the second token and then put that here from these three."}, {"start": 472.0, "end": 477.0, "text": " You try to predict the third token and so on. That's the autoregressivity. In text, that works well."}, {"start": 477.0, "end": 483.0, "text": " However, in images, it's not quite obvious how to do that."}, {"start": 483.0, "end": 490.0, "text": " That's why you first need to get from the image space to the token space."}, {"start": 490.0, "end": 496.0, "text": " So we need a way for any given image that we get out a sequence of tokens."}, {"start": 496.0, "end": 500.0, "text": " And it can't be the pixels themselves."}, {"start": 500.0, "end": 512.0, "text": " We would like to have tokens that are kind of latent and have sort of a bit of meaning, not just individual pixels, because that, first of all, is too many pixels."}, {"start": 512.0, "end": 521.0, "text": " And second of all, there's not too much, let's say, information in the single pixel."}, {"start": 521.0, "end": 524.0, "text": " So what we do is we have these image tokenizer and de-tokenizer."}, {"start": 524.0, "end": 530.0, "text": " This is a VQGAN that is powered by a vision transformer."}, {"start": 530.0, "end": 535.0, "text": " So essentially, this is a model that takes this image, it ships it through a bunch of layers."}, {"start": 535.0, "end": 542.0, "text": " And at the end, so let's say the image at the beginning has a bunch of rows, a bunch of columns with its pixels."}, {"start": 542.0, "end": 547.0, "text": " This goes through a series of maybe downscalings and so on."}, {"start": 547.0, "end": 556.0, "text": " No, actually, it's because it's a vision transformer, it probably even tokenizes like it patches the image at the very beginning."}, {"start": 556.0, "end": 561.0, "text": " So these would be image patches. Then these are transformed by a transformer to a latent space."}, {"start": 561.0, "end": 565.0, "text": " Maybe they are compressed."}, {"start": 565.0, "end": 569.0, "text": " And then you get tokens."}, {"start": 569.0, "end": 577.0, "text": " So at the end, you can take these things right here or the things that correspond to them in the latent representation."}, {"start": 577.0, "end": 585.0, "text": " You can take those as image tokens and you can unroll essentially this image and then feed it into this model."}, {"start": 585.0, "end": 589.0, "text": " Hey, just a short interjection here from Janek from the future."}, {"start": 589.0, "end": 598.0, "text": " The idea, I forgot, the idea behind the whole setup here is behind the whole VQGAN is obviously that these things here,"}, {"start": 598.0, "end": 603.0, "text": " are tokens, which means that they come from a set vocabulary."}, {"start": 603.0, "end": 613.0, "text": " So the way you train a VQGAN isn't just to give you this latent representation of like token like things, but then you also quantize them."}, {"start": 613.0, "end": 621.0, "text": " So there is also a vocabulary somewhere where you have a set defined set of tokens."}, {"start": 621.0, "end": 626.0, "text": " I believe in their case, they have like eight eight thousand tokens or so."}, {"start": 626.0, "end": 632.0, "text": " And your image tokens must be of these eight thousand."}, {"start": 632.0, "end": 639.0, "text": " So the image has a bunch of tokens, but they all must be one of the things in the vocabulary here."}, {"start": 639.0, "end": 645.0, "text": " Now, the vocabulary is also learned. There are some techniques by which to learn the vocabulary."}, {"start": 645.0, "end": 654.0, "text": " But this quantization is actually what then enables you to treat essentially to treat it as a sequence of language tokens,"}, {"start": 654.0, "end": 657.0, "text": " which also come from a vocabulary. All right."}, {"start": 657.0, "end": 664.0, "text": " Back to Janek in the past. The image tokenizer is trained as an as it says here as a VQGAN,"}, {"start": 664.0, "end": 671.0, "text": " which means that you encode and then you decode again and you try to get out the same image."}, {"start": 671.0, "end": 678.0, "text": " And at the end, this representation here in the middle is really valuable because it's a tokenized representation of an image."}, {"start": 678.0, "end": 684.0, "text": " So you put that into the transformer right here."}, {"start": 684.0, "end": 692.0, "text": " And this is, as we said, an autoregressive model. So it gets as an input, obviously the sequence so far."}, {"start": 692.0, "end": 697.0, "text": " It tries to predict the next image token, but also gets as an input the text."}, {"start": 697.0, "end": 701.0, "text": " So this is the prompt that the user puts in."}, {"start": 701.0, "end": 712.0, "text": " So the prompt is encoded in a transformer encoder and is then fed in as a side input as a target for attention."}, {"start": 712.0, "end": 723.0, "text": " So whenever in the layer here you have queries, keys and values, I'm going to guess the query can also look at the transformer encoder."}, {"start": 723.0, "end": 730.0, "text": " The query can also look at the keys right here. So over here, you'd only have keys and values."}, {"start": 730.0, "end": 740.0, "text": " If you don't know what all of this means, I have a video on attention is all you need, where you can learn how attention mechanisms work."}, {"start": 740.0, "end": 750.0, "text": " So essentially, the way this is trained is the following. You attach a sentence here or a description of an image and you attach an image right here."}, {"start": 750.0, "end": 758.0, "text": " The image is then patched. It is fed through the VQGAN encoder."}, {"start": 758.0, "end": 765.0, "text": " Its latent representation is obtained. That latent representation is put here."}, {"start": 765.0, "end": 778.0, "text": " And then you essentially train a decoder language model that has cross attention into the text representation of the prompt."}, {"start": 778.0, "end": 785.0, "text": " So you simply train this thing right here like you would train a GPT model or any other model."}, {"start": 785.0, "end": 790.0, "text": " And this thing right here is trained, as I said, as an image reconstruction model."}, {"start": 790.0, "end": 799.0, "text": " And this thing right here is trained, I guess, jointly with this. Actually, don't know. This could this could not be true, but I think it is true."}, {"start": 799.0, "end": 805.0, "text": " I think it is trained jointly. So that's the model, as I said, is very basic."}, {"start": 805.0, "end": 811.0, "text": " I wish I could tell you something more interesting right here, but I can't."}, {"start": 811.0, "end": 819.0, "text": " It's a standard, you know, bunch of transformers in sequence. Essentially, every single component right here is a transformer."}, {"start": 819.0, "end": 827.0, "text": " And because every single thing is a transformer, you can scale this thing by a lot."}, {"start": 827.0, "end": 838.0, "text": " By the way, here you can see a bunch of the I'm not going to go into the architectural details quite quite as much."}, {"start": 838.0, "end": 845.0, "text": " But they do also train an upsampler. So they have images of resolution 256 by 256."}, {"start": 845.0, "end": 854.0, "text": " Ultimately, they do train an upsampler as well, where so here this is the upsampler super resolution upsampler,"}, {"start": 854.0, "end": 866.0, "text": " where they can go from their pipeline, which does 256 by 256 to a 1024 by 1024 picture, essentially."}, {"start": 866.0, "end": 873.0, "text": " But this is just upsampling. Right. So there is, I mean, technically no extra information right here."}, {"start": 873.0, "end": 883.0, "text": " This doesn't get to look at the prompt or anything like this. It simply gets to look at this image and then make a four times larger image out of that."}, {"start": 883.0, "end": 890.0, "text": " So where did we leave off? Oh, yeah. I also wanted to say if you now want to get an image out of this thing."}, {"start": 890.0, "end": 896.0, "text": " So not training, but inference. What you do is you attach only the prompt right here."}, {"start": 896.0, "end": 901.0, "text": " Right. You encode the prompt. You put the start of sentence token right here."}, {"start": 901.0, "end": 908.0, "text": " You let the model generate one. Then you put that here, two. Then you put that here, three and so on."}, {"start": 908.0, "end": 913.0, "text": " You let the model generate the image tokens here. You take those image tokens."}, {"start": 913.0, "end": 924.0, "text": " You feed, you arrange it into the latent representation of the VQGAN and you use the decoder right here in order to generate the final image."}, {"start": 924.0, "end": 931.0, "text": " So that's the whole flow. And then you put it through the super resolution if you want that."}, {"start": 931.0, "end": 935.0, "text": " Here you can see the basics, the basic architectural layouts."}, {"start": 935.0, "end": 939.0, "text": " So there is the smallest model has 350 million parameter."}, {"start": 939.0, "end": 947.0, "text": " You can see it has 12 encoder and 12 decoder layer. It's pretty standard transformer scaling laws right here."}, {"start": 947.0, "end": 952.0, "text": " I mean, scaling laws, pretty standard transformer architectural laws."}, {"start": 952.0, "end": 957.0, "text": " They go through a 750 million parameter model, three billion."}, {"start": 957.0, "end": 961.0, "text": " And the last one here has 20 billion parameters."}, {"start": 961.0, "end": 967.0, "text": " So that's a decently sized model. It's not as large as the large language models."}, {"start": 967.0, "end": 972.0, "text": " And they do use things like sparse conv attention and things like this."}, {"start": 972.0, "end": 981.0, "text": " But it is, you know, it's pretty large, I would say. You could not run that at home very easily."}, {"start": 981.0, "end": 993.0, "text": " So where does that get us? They have a big description right here, how they solve this architecturally, how they chart the model, how they use parallelism, which is very interesting."}, {"start": 993.0, "end": 1000.0, "text": " I'm just not an expert at it. So if you're interested, I'll leave you to read this part."}, {"start": 1000.0, "end": 1013.0, "text": " I found the at least the drawings here pretty cool. So apparently this the signal is routed like, you know, like so, like so and so."}, {"start": 1013.0, "end": 1027.0, "text": " So like in like a snake type of arrangement so that always you can pipeline so that always one thing is essentially busy as you send data to the next thing and so on."}, {"start": 1027.0, "end": 1037.0, "text": " But as I said, I'm not the expert in this and I'd rather want to get to the other things, which are the data sets that they use."}, {"start": 1037.0, "end": 1040.0, "text": " So they have three data sets, three main data sets right here."}, {"start": 1040.0, "end": 1050.0, "text": " One is MS Coco. Now MS Coco, as they show right here for the image on the right hand side, it simply says a bowl of broccoli and apples with a utensil."}, {"start": 1050.0, "end": 1061.0, "text": " So it just kind of is a high level description of what's in the image, like an image, simple image caption right for this image right here."}, {"start": 1061.0, "end": 1077.0, "text": " Whereas the localized narratives data set, you can see that its description is way longer. It's more linguistically prosaic, but it is also much more descriptive of the actual image."}, {"start": 1077.0, "end": 1095.0, "text": " Like so the top is if you want to tell someone what's in an image and the bottom is more like if you want to like really paint the picture, like pun intended, or if you want to describe the picture to someone so that they could maybe recreate it in some way."}, {"start": 1095.0, "end": 1107.0, "text": " And it turns out that we are now at the point with these image generation models where they are so good that we need data sets like the bottom one to really push them to their limits."}, {"start": 1107.0, "end": 1123.0, "text": " And not only that, but the authors here find that there are even problems with that because these image data sets, they're always created in a way that an image is given and then the humans are asked to write a description, which is really good because then you have image and description together."}, {"start": 1123.0, "end": 1136.0, "text": " However, the authors here know that this prevents, for example, fantasy pictures like we saw before the raccoon in cubism that it doesn't exist."}, {"start": 1136.0, "end": 1143.0, "text": " So it can't be in any data set or a noob in a leather jacket doesn't exist. So it can't be in any data set."}, {"start": 1143.0, "end": 1157.0, "text": " So while we rely on generalization during training for the model to learn these things, we actually need data sets like that to evaluate whether they can really do these things, right?"}, {"start": 1157.0, "end": 1168.0, "text": " Otherwise we're left with sort of subjective evaluation. So they come up with their own data set, which is called party prompts."}, {"start": 1168.0, "end": 1181.0, "text": " And that's actually also the thing they release as far as I understand it. Obviously, as all of the recent works in big models, this thing isn't released."}, {"start": 1181.0, "end": 1194.0, "text": " There's no code. There's no, I mean, the code would be trivial. There's no weights. There's no training recipe. There's no, some of the data sets are proprietary if I understand correctly."}, {"start": 1194.0, "end": 1200.0, "text": " So the paper is more open about what they do, but still that there is no way of accessing this."}, {"start": 1200.0, "end": 1207.0, "text": " So party prompts. This is a data set that essentially only consists of prompts. So there is no images in this data set."}, {"start": 1207.0, "end": 1217.0, "text": " And I believe the only way you can really assess thing is you can let the model generate stuff and then you can let humans rate it."}, {"start": 1217.0, "end": 1232.0, "text": " That's essentially it. The party prompts, it is pretty interesting because they create these prompts by letting the prompt engineers sort of, they choose, for example, a challenge."}, {"start": 1232.0, "end": 1250.0, "text": " So the challenge might be perspective, right, which could be, you know, I need a prompt that asks for some object in some specific perspective that is unusual or quantity."}, {"start": 1250.0, "end": 1260.0, "text": " Like I need a prompt that asks for a given number of things because we know that these models, they're not super good at counting."}, {"start": 1260.0, "end": 1269.0, "text": " Right. I mean, we also thought the models aren't super good at spelling. And now it turns out, well, if we just make them bigger, they are."}, {"start": 1269.0, "end": 1276.0, "text": " So, you know, I'm fairly confident they're going to be good at counting in a short while."}, {"start": 1276.0, "end": 1284.0, "text": " That's the challenge. There's also, if I recall correctly, this is this upper table right here, like categories."}, {"start": 1284.0, "end": 1297.0, "text": " So there are categories, animals, there are categories, illustrations and so on. So you can see this is a diverse set of category challenge combinations and they make a bunch of prompts for each one."}, {"start": 1297.0, "end": 1307.0, "text": " I think they have about 1600 prompts in total in this party prompt eval set, which is a pretty neat thing to have, even if it comes without images."}, {"start": 1307.0, "end": 1320.0, "text": " So now they train the thing with their whole architectural shabangs with the parallelism and the pipelining and the yada, yada, yada on TPU v4, I think."}, {"start": 1320.0, "end": 1331.0, "text": " So this is a huge operation. So what does that give us? I want to just jump the evals here on the metrics because yes, yes, yes, they're very good, very good."}, {"start": 1331.0, "end": 1343.0, "text": " They're also very good as rated by humans, humans very good, which is what's interesting is they have, for example, a retrieval baseline, which simply retrieves images from the training data set."}, {"start": 1343.0, "end": 1353.0, "text": " And even if the obviously image text match, the party model wins because you can actually create an image and not retrieve one."}, {"start": 1353.0, "end": 1361.0, "text": " But even in image realism, you can see the retrieval is only slightly higher in realism, right?"}, {"start": 1361.0, "end": 1375.0, "text": " Every single image is real that the retrieval retrieves. And still the humans rate the realism of party almost the same, which is quite speaking for the model."}, {"start": 1375.0, "end": 1385.0, "text": " The loss curves are also pretty interesting, especially interesting that the 20 billion model here, it takes quite a time to come down here, right?"}, {"start": 1385.0, "end": 1401.0, "text": " It kind of has to get surpassed by the three billion model initially and then overtakes it, which maybe means that we haven't exactly found the right training recipes yet for these largest of models."}, {"start": 1401.0, "end": 1410.0, "text": " So this now is the cool part where they put the model, the models next to one another."}, {"start": 1410.0, "end": 1418.0, "text": " So this is the same prompt with all of these different models. And you can just see where scale gets you."}, {"start": 1418.0, "end": 1430.0, "text": " This is a portrait photo of a kangaroo wearing an orange hoodie and blue sunglasses standing on the grass in front of the Sydney Opera House, holding a sign on the chest that says Welcome Friends."}, {"start": 1430.0, "end": 1440.0, "text": " And you can see my this, these these things right here, this and this, there may be like Dolly Mini kind of style pictures."}, {"start": 1440.0, "end": 1445.0, "text": " And there are also that scale, right? And then we go to the three B model."}, {"start": 1445.0, "end": 1454.0, "text": " And this is something that would be familiar maybe from something like Dolly or Dolly, maybe between Dolly and Dolly too, right?"}, {"start": 1454.0, "end": 1463.0, "text": " These things you can see they're bad at spelling, but as soon as you go bigger, all of a sudden, Welcome Friends, bada boom, there it is."}, {"start": 1463.0, "end": 1470.0, "text": " Not bad at spelling anymore. All you need to scale. That's crazy. The sign, very deep learning."}, {"start": 1470.0, "end": 1486.0, "text": " Look, as the model learns to spell, initially it can only do Russian or whatever and and just eventually it would actually be funny if that was like actual Russian and it said very deep learning."}, {"start": 1486.0, "end": 1493.0, "text": " Can you imagine how crazy that would be? Well, in any case, and also the Grand Canyon, right?"}, {"start": 1493.0, "end": 1501.0, "text": " So there's kind of structure here and so on, but this very, very deep learning. Perfect."}, {"start": 1501.0, "end": 1510.0, "text": " A blue Porsche parked in front of a yellow brick wall. You can see it doesn't always work."}, {"start": 1510.0, "end": 1516.0, "text": " But it works better and better and better with scale. Crazy."}, {"start": 1516.0, "end": 1526.0, "text": " And here this is like maybe like is this a direct shot at Gary Marcus because the challenge is like an astronaut riding a horse."}, {"start": 1526.0, "end": 1532.0, "text": " So astronaut riding a horse in the forest, even the three billion model."}, {"start": 1532.0, "end": 1539.0, "text": " Oh, no, it's going to be a horse riding an astronaut, which is going to come up later. And I promise it's going to be funny."}, {"start": 1539.0, "end": 1547.0, "text": " But yeah, an astronaut riding a horse in the water in front of them, water lilies and so on."}, {"start": 1547.0, "end": 1555.0, "text": " A map of the United States made out of sushi. So as you can see, these these results are fairly insane."}, {"start": 1555.0, "end": 1562.0, "text": " Infinity, the back of a violin, four cats surrounding a dog. So now they're really testing these individual categories."}, {"start": 1562.0, "end": 1569.0, "text": " Infinity is an abstract concept. Back of violin is perspective. Four cats surrounding a dog is this quantity metric."}, {"start": 1569.0, "end": 1579.0, "text": " You can you can see there are four cats. Right. So, yeah, I'm pretty confident that with with scale, these types of problems are going to be solved."}, {"start": 1579.0, "end": 1592.0, "text": " Scroll gives an apple to a bird. Yeah, so. What's interesting is they have this narrative of what they call growing a cherry tree."}, {"start": 1592.0, "end": 1602.0, "text": " So obviously these samples here are cherry picked, which means that they take out whatever they think are good samples to present in the paper."}, {"start": 1602.0, "end": 1614.0, "text": " However, they detail fairly extensively how they arrive at this thing. So what they do is they don't just come up with these long prompts by themselves."}, {"start": 1614.0, "end": 1626.0, "text": " Well, these aren't long. OK, but, you know, these long prompts with Anubis in front in a leather jacket in front of Los Angeles skyline, they don't just come up with them on the spot."}, {"start": 1626.0, "end": 1639.0, "text": " They have a process of coming up with them. And the process is detailed here. So, for example, they have this idea of combining like a sloth with a van."}, {"start": 1639.0, "end": 1648.0, "text": " Right. So they start by just exploring the model and entering things like a smiling sloth, like what comes out. Right."}, {"start": 1648.0, "end": 1659.0, "text": " And a van parked on grass. There are always good images and bad images that turn out and they sort of learn how to have to tweak the prompt to get what they want."}, {"start": 1659.0, "end": 1673.0, "text": " Once they're happy, they go on. So they modify the prompt a bit. So there is the smiling sloth wearing a leather jacket, a cowboy hat and a kilt or wearing a bow tie and holding a quarter staff."}, {"start": 1673.0, "end": 1684.0, "text": " So they kind of explore. They go more and more, as you can see, as you go down this tree, this cherry tree, as they call it, they go down and down."}, {"start": 1684.0, "end": 1692.0, "text": " They detail. Well, sometimes there's problems. This one, I believe, has two two arms on this side and so on."}, {"start": 1692.0, "end": 1699.0, "text": " So but still they refine and refine and refine. They finally try to combine them. Right."}, {"start": 1699.0, "end": 1706.0, "text": " Yeah, here is here is a combination. They refine again. They try to combine the two prompts again."}, {"start": 1706.0, "end": 1717.0, "text": " And at the end, they get to something that they might be happy with, for example, the thing here on the left, like this one right here."}, {"start": 1717.0, "end": 1723.0, "text": " But I found this pretty interesting, like this process of arriving at these things."}, {"start": 1723.0, "end": 1745.0, "text": " So you can't just enter any old long sentence and expect the model to do well. But what turns what might what will work often better, at least as they describe it, is to go through this process right here, which also means that full artistic freedom is a bit away."}, {"start": 1745.0, "end": 1758.0, "text": " So it is almost like, yes, you are guiding the model with your inputs, but also the model is kind of guiding you by what it does well and what it doesn't do well if you go via this process."}, {"start": 1758.0, "end": 1769.0, "text": " And if you don't go via this process, then I guess you can expect that you you can expect that it might not work as well."}, {"start": 1769.0, "end": 1788.0, "text": " So they also have some failure cases, which is pretty cool. For example, the failure cases like color bleeding, where you describe the color of one of the things in the image and sort of the other take on that that color."}, {"start": 1788.0, "end": 1801.0, "text": " There's also counting failures and so on, localization failures. For example, here the prompt is, the prompt is."}, {"start": 1801.0, "end": 1808.0, "text": " Oh, yeah, the Great Pyramid of Giza situated in front of Mount Everest. That's the bottom two pictures should be that."}, {"start": 1808.0, "end": 1820.0, "text": " You can see this, OK, I mean, this isn't this isn't too bad, but this here is just like the pyramid with sort of a Mount Everest cover, right?"}, {"start": 1820.0, "end": 1833.0, "text": " You can see these models, they sometimes if they can't fulfill the problem directly, they'll kind of mix, they'll just try to get it done somehow and get it really close in text embedding space."}, {"start": 1833.0, "end": 1841.0, "text": " That's exactly what you can see right here. There's a bunch of examples."}, {"start": 1841.0, "end": 1859.0, "text": " And this one, I told you, it's the horse riding on an astronaut. So they have to actually specify the horse is sitting on an astronaut because the riding is just is just riding indicates too much that the horse is on the bottom."}, {"start": 1859.0, "end": 1869.0, "text": " But I just found the horse riding on the astronaut to be absolutely hilarious, especially this one."}, {"start": 1869.0, "end": 1880.0, "text": " Yeah, but all in all, I guess what I wanted to say is that this is complaining on a on a very, very high level, right?"}, {"start": 1880.0, "end": 1894.0, "text": " The paper itself is like moving the goal posts already by sort of criticizing itself for, oh, well, I specified like nine apples in a perfect arrangement."}, {"start": 1894.0, "end": 1902.0, "text": " I don't have or right at ten red apples and it's only eight red apples. Like what?"}, {"start": 1902.0, "end": 1917.0, "text": " What a loser model. Look at that. I mean, this is it is crazy good how these models are and the failure cases here are, you know, yes, they're failure cases."}, {"start": 1917.0, "end": 1930.0, "text": " But I don't think that if you told me three, four years ago that this is the type of error that we're at solving that I would have said, yeah, I believe that."}, {"start": 1930.0, "end": 1942.0, "text": " I would have way guessed we're still at the point where, you know, we we have mode collapses, we can't create most of the text stuff, we have artifacts and all kinds of things."}, {"start": 1942.0, "end": 1953.0, "text": " And I think this is yeah, it's it's kind of mind blowing how fast the progress here is obviously half a year ago or so."}, {"start": 1953.0, "end": 1965.0, "text": " Yeah, I would have expected something like this, but I believe, yeah, a lot of people must be very surprised and including me."}, {"start": 1965.0, "end": 1972.0, "text": " Yeah, like spelling mistakes, like complaining that, you know, sometimes text is still not spelled right."}, {"start": 1972.0, "end": 1983.0, "text": " No, even though, right, Dali couldn't do it at all. And now this thing is doing it almost perfectly, as you can see right here, combining abstract concepts."}, {"start": 1983.0, "end": 1991.0, "text": " Look at the thing on top. It's it's insane. Or here like, oh, this leg is in behind the race car."}, {"start": 1991.0, "end": 1998.0, "text": " Come on. This is better than I guess anyone had expected."}, {"start": 1998.0, "end": 2006.0, "text": " So, yeah, I don't want to waste your time too much more. I just thought this was absolutely cool."}, {"start": 2006.0, "end": 2015.0, "text": " And I'm very excited to see where this is going next. Of course, huge bummer that we don't get access to this."}, {"start": 2015.0, "end": 2026.0, "text": " I hope this finds its way into some products that we can use. As you know, I'm all for these companies making making money with their inventions."}, {"start": 2026.0, "end": 2035.0, "text": " I mean, I think it's cool that they are inventing and, you know, if they want to make some cash off of it, you know, good for them."}, {"start": 2035.0, "end": 2048.0, "text": " But I do hope that we actually get to use it. And I it's going to be a fun future where for every presentation or anything, if you need like an illustration, you just you just type it right."}, {"start": 2048.0, "end": 2056.0, "text": " You don't go to the Internet to search an appropriate stock photo. You just type it. It's so cool. Or you want to change something in a picture."}, {"start": 2056.0, "end": 2062.0, "text": " You just erase it. You just say, well, ever here, change that part to something else. So cool."}, {"start": 2062.0, "end": 2069.0, "text": " No Photoshop skills anymore. No drawing skills anymore. Just you and your mind and your creativity."}, {"start": 2069.0, "end": 2080.0, "text": " All right. That was it. As I said, the paper presented in this new system is fairly simple. All it does is scale a bunch of transformers in sequence."}, {"start": 2080.0, "end": 2087.0, "text": " Essentially, I presented a evaluation benchmark, these party prompts, and it presented."}, {"start": 2087.0, "end": 2099.0, "text": " Yeah, their their model, which is ridiculously insane. That was it for me. Let me know what you think. And I'll see you around. Bye bye."}]
Generative Models
https://www.youtube.com/watch?v=af6WPqvzjjk
[ML News] Text-to-Image models are taking over! (Imagen, DALL-E 2, Midjourney, CogView 2 & more)
"#mlnews #dalle #imagen \n\nAll things text-to-image models like DALL-E and Imagen!\n\nOUTLINE:\n0:0(...TRUNCATED)
" Google releases imagine an unprecedented text to image model, cog view two improves drastically ov(...TRUNCATED)
"[{\"start\": 0.0, \"end\": 6.640000000000001, \"text\": \" Google releases imagine an unprecedented(...TRUNCATED)
Generative Models
https://www.youtube.com/watch?v=YQ2QtKcK2dA
The Man behind Stable Diffusion
"#stablediffusion #ai #stabilityai\n\nAn interview with Emad Mostaque, founder of Stability AI.\n\nO(...TRUNCATED)
" This is a mud, a mud is very rich, and he wants to put that money to good use. So just a few days (...TRUNCATED)
"[{\"start\": 0.0, \"end\": 6.5, \"text\": \" This is a mud, a mud is very rich, and he wants to put(...TRUNCATED)
Yannic Kilchner
https://www.youtube.com/watch?v=ZTs_mXwMCs8
Galactica: A Large Language Model for Science (Drama & Paper Review)
"#ai #galactica #meta\n\nGalactica is a language model trained on a curated corpus of scientific doc(...TRUNCATED)
" Hello, this video starts out with a review of the drama around the public demo of the Galactica mo(...TRUNCATED)
"[{\"start\": 0.0, \"end\": 5.92, \"text\": \" Hello, this video starts out with a review of the dra(...TRUNCATED)
Yannic Kilchner
https://www.youtube.com/watch?v=TOo-HnjjuhU
[ML News] Multiplayer Stable Diffusion | OpenAI needs more funding | Text-to-Video models incoming
"#mlnews #ai #mlinpl\n\nYour news from the world of Machine Learning!\n\nOUTLINE:\n0:00 - Introducti(...TRUNCATED)
" A lot of text to video models have recently come out, but not only that, a lot of other stuff has (...TRUNCATED)
"[{\"start\": 0.0, \"end\": 5.54, \"text\": \" A lot of text to video models have recently come out,(...TRUNCATED)
Yannic Kilchner
https://www.youtube.com/watch?v=W5M-dvzpzSQ
The New AI Model Licenses have a Legal Loophole (OpenRAIL-M of BLOOM, Stable Diffusion, etc.)
"#ai #stablediffusion #license \n\nSo-called responsible AI licenses are stupid, counterproductive, (...TRUNCATED)
" The new responsible AI licenses that models like stable diffusion or bloom have are stupid. They c(...TRUNCATED)
"[{\"start\": 0.0, \"end\": 8.02, \"text\": \" The new responsible AI licenses that models like stab(...TRUNCATED)
Yannic Kilchner
https://www.youtube.com/watch?v=_NMQyOu2HTo
ROME: Locating and Editing Factual Associations in GPT (Paper Explained & Author Interview)
"#ai #language #knowledge \n\nLarge Language Models have the ability to store vast amounts of facts (...TRUNCATED)
" Hello, today we're talking about locating and editing factual associations in GPT by Kevin Meng, D(...TRUNCATED)
"[{\"start\": 0.0, \"end\": 5.84, \"text\": \" Hello, today we're talking about locating and editing(...TRUNCATED)
Yannic Kilchner
https://www.youtube.com/watch?v=igS2Wy8ur5U
Is Stability turning into OpenAI?
"#stablediffusion #aiart #openai \n\nStability AI has stepped into some drama recently. They are acc(...TRUNCATED)
" stability AI has a few growing pains in the recent weeks, they found themselves in multiple contro(...TRUNCATED)
"[{\"start\": 0.0, \"end\": 6.8, \"text\": \" stability AI has a few growing pains in the recent wee(...TRUNCATED)
Yannic Kilchner
https://www.youtube.com/watch?v=_okxGdHM5b8
Neural Networks are Decision Trees (w/ Alexander Mattick)
"#neuralnetworks #machinelearning #ai \n\nAlexander Mattick joins me to discuss the paper \"Neural N(...TRUNCATED)
" Hello everyone, today we're talking about neural networks and decision trees. I have Alexander Mad(...TRUNCATED)
"[{\"start\": 0.0, \"end\": 10.5, \"text\": \" Hello everyone, today we're talking about neural netw(...TRUNCATED)

Dataset Card for "yannic_test2"

More Information needed

Downloads last month
0
Edit dataset card