cloneofsimo committed 57ac608 (parent: d1c28a9): Update README.md

README.md (changed sections below):
<img src="contents/vae.png" alt="small" width="800">
</p>

*Original, reconstructed from float16, reconstructed from uint8*

Find the 138 GB ImageNet dataset too bulky? Did you know the entire ImageNet actually just fits inside the RAM of an Apple Watch?

* Resized and center-cropped to 256x256
* VAE compressed with [SDXL's VAE](https://huggingface.co/stabilityai/sdxl-vae)
* Further quantized to int8 in a near-lossless manner, compressing the entire training set of 1,281,167 images down to just 5 GB!

Introducing Imagenet.int8, the new MNIST of 2024. Ever since the great popularity of [Latent Diffusion](https://arxiv.org/abs/2112.10752) (thank you, Stable Diffusion!), it has been *almost* the standard to use a VAE-encoded version of ImageNet for diffusion-model training. As you might know, a lot of great diffusion research is based on the latent variant of ImageNet.

These include:

* [Min-SNR](https://openaccess.thecvf.com/content/ICCV2023/html/Hang_Efficient_Diffusion_Training_via_Min-SNR_Weighting_Strategy_ICCV_2023_paper.html)
* [MDT](https://openaccess.thecvf.com/content/ICCV2023/papers/Gao_Masked_Diffusion_Transformer_is_a_Strong_Image_Synthesizer_ICCV_2023_paper.pdf)

... but there's so little material online about the actual preprocessed dataset. I'm here to fix that. One thing I noticed is that the latents don't have to be full precision! Indeed, they can be as small as int8, and it doesn't hurt! A minimal sketch of the idea follows below.
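
This excerpt doesn't spell out the exact quantization scheme, so the following is only a hedged sketch: a per-tensor affine map onto the 8-bit range, with illustrative function names (`quantize_latent` / `dequantize_latent`) that are not from the dataset's actual code.

```
import torch

# Hedged sketch of the idea, NOT the dataset's actual code: a per-tensor
# affine map from float latents onto the 8-bit range [0, 255].
def quantize_latent(latent: torch.Tensor):
    lo, hi = latent.min(), latent.max()
    scale = (hi - lo).clamp(min=1e-8) / 255.0
    q = ((latent - lo) / scale).round().clamp(0, 255).to(torch.uint8)
    return q, scale, lo

def dequantize_latent(q: torch.Tensor, scale: torch.Tensor, lo: torch.Tensor) -> torch.Tensor:
    return q.to(torch.float32) * scale + lo

# An SDXL-VAE latent for a 256x256 image is 4 x 32 x 32 = 4096 values.
latent = torch.randn(4, 32, 32)
q, scale, lo = quantize_latent(latent)
err = (latent - dequantize_latent(q, scale, lo)).abs().max()
print(err)  # rounding error is bounded by scale / 2: near-lossless
```

One byte per latent value is also where the headline number comes from: 1,281,167 images x 4096 bytes is roughly 5.2 GB.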

So clearly, it doesn't make sense to download the entire ImageNet and run it through the VAE every time. Just download this, `to('cuda')` the entire dataset just to flex, and call it a day. 😌

(BTW, if you think you'll need higher precision, you can always fine-tune your model further at higher precision. But I doubt that.)

```
train_dataloader = torch.utils.data.DataLoader(
    # ... (dataset construction from earlier in the README, unchanged and omitted in this diff) ...
    batch_size=32,
    num_workers=3,
)
```
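
The dataset construction itself sits above this hunk and isn't part of the commit. Purely as an illustration, a map-style dataset serving flattened uint8 latents might look like the sketch below; every file name and field here is a hypothetical stand-in, not the dataset's real storage format.

```
import numpy as np
import torch

# Illustrative only: the real storage format isn't shown in this diff, so the
# file names and fields below are hypothetical stand-ins.
class LatentDataset(torch.utils.data.Dataset):
    def __init__(self, latents_path="latents_uint8.npy", labels_path="labels.npy"):
        self.latents = np.load(latents_path, mmap_mode="r")  # (N, 4096) uint8, flattened
        self.labels = np.load(labels_path)                   # (N,) labels

    def __len__(self):
        return len(self.latents)

    def __getitem__(self, i):
        # Serve the flattened uint8 latent as-is; dequantize/reshape at use time.
        return torch.from_numpy(self.latents[i].copy()), str(self.labels[i])

train_dataloader = torch.utils.data.DataLoader(
    LatentDataset(),
    batch_size=32,
    num_workers=3,
)
```

Keeping samples flattened and uint8 holds the footprint at one byte per latent value; dequantize after batching, ideally on the GPU.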

That's the dataloader! Below is the example usage. Notice how you have to reshape the latent, since each sample is stored flattened.

```
###### Example Usage. Let's see if we can get the 5th image. BTW shuffle plz
from diffusers import AutoencoderKL
from diffusers.image_processor import VaeImageProcessor

model = "stabilityai/your-stable-diffusion-model"
vae = AutoencoderKL.from_pretrained("stabilityai/sdxl-vae").to("cuda:0")

# ... (fetching the 5th sample and un-flattening it: unchanged lines, omitted in this diff) ...

print(f"idx: {idx}, text_label: {text_label}, latent: {vae_latent.shape}")

x = vae.decode(vae_latent.cuda()).sample
img = VaeImageProcessor().postprocess(image = x.detach(), do_denormalize = [True, True])[0]
img.save("5th_image.png")
```
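
The omitted middle of that example is exactly where the un-flattening happens. Here is a hedged sketch of that step, reusing the illustrative `LatentDataset` and `dequantize_latent` from above; the names, the stored `scale`/`lo`, and the schema are assumptions, while the 1 x 4 x 32 x 32 shape follows from SDXL's VAE at 256x256.

```
import torch

# Hypothetical names throughout, consistent with the sketches above.
dataset = LatentDataset()
idx = 5
flat, text_label = dataset[idx]                 # flattened uint8 latent, shape (4096,)

# In this sketch, scale/lo are fixed constants shipped alongside the dataset.
vae_latent = dequantize_latent(flat, scale, lo)
vae_latent = vae_latent.reshape(1, 4, 32, 32)   # un-flatten: batch of 1, SDXL latent shape
print(f"idx: {idx}, text_label: {text_label}, latent: {vae_latent.shape}")
```

After this, `vae_latent` plugs straight into the `vae.decode` call shown above.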

Enjoy!

# Citation

```bibtex
@misc{imagenet_int8,
  author    = {Simo Ryu},
  title     = {Imagenet.int8: Entire Imagenet dataset in 5GB},
  year      = 2024,
  publisher = {Hugging Face Datasets},
  url       = {https://huggingface.co/datasets/cloneofsimo/imagenet.int8},
  note      = {Entire Imagenet dataset compressed to 5GB using VAE and quantized with int8}
}
```