|
# Imagenet.int8: Entire Imagenet dataset in 5GB. |
|
|
|
|
|
Find the 138 GB ImageNet dataset too bulky? Did you know the entire ImageNet actually fits inside the RAM of an Apple Watch?
|
|
|
* Center-cropped and resized to 256x256
|
* VAE compressed with [SDXL's VAE](https://huggingface.co/stabilityai/sdxl-vae) |
|
* Further quantized to int8 in a near-lossless manner, compressing the entire training set of 1,281,167 images down to just 5 GB!
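As a quick sanity check on that number: the SDXL VAE maps a 256x256 image to a 4x32x32 latent (8x spatial downsampling, 4 channels), so at one byte per value the whole training set works out to roughly 5 GB:

```python
# Back-of-the-envelope check on the advertised dataset size.
# Each 256x256 image -> 4x32x32 latent, one byte per value at int8.
n_images = 1_281_167
bytes_per_image = 4 * 32 * 32  # 4096 bytes per latent
total_gb = n_images * bytes_per_image / 1e9
print(f"{total_gb:.2f} GB")  # ~5.25 GB
```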
|
|
|
Introducing Imagenet.int8, the new MNIST of 2024. After the great popularity of the Latent Diffusion era (thank you, Stable Diffusion!), it's *almost* standard to train diffusion models on a VAE-encoded version of ImageNet. As you might know, a lot of great diffusion research is based on latent variants of ImageNet.
|
|
|
These include: |
|
|
|
* [DiT](https://arxiv.org/abs/2212.09748) |
|
* [Improving Training Dynamics](https://arxiv.org/abs/2312.02696v1)
|
* [SiT](https://arxiv.org/abs/2401.08740) |
|
* [U-ViT](https://openaccess.thecvf.com/content/CVPR2023/html/Bao_All_Are_Worth_Words_A_ViT_Backbone_for_Diffusion_Models_CVPR_2023_paper.html) |
|
* [Min-SNR](https://openaccess.thecvf.com/content/ICCV2023/html/Hang_Efficient_Diffusion_Training_via_Min-SNR_Weighting_Strategy_ICCV_2023_paper.html) |
|
* [MDT](https://openaccess.thecvf.com/content/ICCV2023/papers/Gao_Masked_Diffusion_Transformer_is_a_Strong_Image_Synthesizer_ICCV_2023_paper.pdf) |
|
|
|
... but there is surprisingly little material online on the actual preprocessed dataset. I'm here to fix that. One thing I noticed was that the latents don't have to be full precision! Indeed, they can be as small as int8, and it won't hurt! Here are some examples:
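The quantization itself can be as simple as an affine map into the uint8 range. Here is a minimal sketch; the `scale` and `shift` values below are illustrative placeholders, not the constants actually used for this dataset:

```python
import numpy as np

def quantize(latents, scale=0.5, shift=0.0):
    # Map float latents in roughly [shift - scale, shift + scale] to uint8 [0, 255].
    q = np.round((latents - shift) / scale * 127.5 + 127.5)
    return np.clip(q, 0, 255).astype(np.uint8)

def dequantize(q, scale=0.5, shift=0.0):
    # Inverse map back to float32; error is bounded by half a quantization step.
    return (q.astype(np.float32) - 127.5) / 127.5 * scale + shift
```

With a scale matched to the latent distribution, the round-trip error per value stays below one quantization step, which is what makes the reconstructions above visually indistinguishable.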
|
|
|
<p align="center"> |
|
<img src="contents/monkey.png" alt="small" width="200"> |
|
<img src="contents/monkey_torch.float32.png" alt="small" width="200"> |
|
<img src="contents/monkey_torch.uint8.png" alt="small" width="200"> |
|
</p> |
|
|
|
*Left to right: original, reconstructed from float32, reconstructed from uint8.*
|
|
|
|
|
So clearly, it doesn't make sense to download the entire ImageNet and run the VAE every time. Just download this, `.to('cuda')` the entire dataset just to flex, and call it a day. 😌
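For illustration, a training-time loader could look like the sketch below. The file layout (a memory-mapped uint8 `.npy` of shape `(N, 4, 32, 32)` plus a label array) and the `scale`/`shift` constants are assumptions for this example, not the dataset's actual on-disk format:

```python
import numpy as np
import torch
from torch.utils.data import Dataset

class Int8LatentDataset(Dataset):
    # Hypothetical loader sketch: assumes latents are stored as a
    # memory-mapped uint8 array of shape (N, 4, 32, 32) next to an
    # integer label array. scale/shift are placeholder values.
    def __init__(self, latent_path, label_path, scale=0.5, shift=0.0):
        self.latents = np.load(latent_path, mmap_mode="r")
        self.labels = np.load(label_path, mmap_mode="r")
        self.scale, self.shift = scale, shift

    def __len__(self):
        return len(self.latents)

    def __getitem__(self, idx):
        # Dequantize a single latent on the fly; keeps the resident
        # dataset at one byte per value until it's actually needed.
        q = torch.from_numpy(np.asarray(self.latents[idx], dtype=np.float32))
        latent = (q - 127.5) / 127.5 * self.scale + self.shift
        return latent, int(self.labels[idx])
```

Because the array is memory-mapped, nothing forces you to hold all 5 GB in RAM at once, though of course you still can.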
|
|
|
(BTW, if you think you'll need higher precision, you can always fine-tune your model at higher precision later. But I doubt you will.)