So clearly, it doesn't make sense to download the entire ImageNet and run the VAE every time. Just download this, `to('cuda')` the entire dataset just to flex, and call it a day. 😌
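
(For scale: each latent is 4×32×32 uint8 values, i.e. 4 KB per image, so all ~1.28M training images come to roughly 5 GB. That's why fitting the whole thing in GPU memory is even on the table.)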

(BTW, if you think you'll need higher precision, you can always fine-tune your model further at higher precision. But I doubt you will.)
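
How lossy is the 8-bit storage, really? Here's a minimal round-trip sketch. The encode half is my assumption, inferred from the decode formula in the reader code below (latents mapped linearly from roughly [-12, 12] to uint8); the exact script that produced the dataset may differ.

```python
import numpy as np

# Hypothetical quantize/dequantize round trip, mirroring the dataset's
# decode formula: (x / 255 - 0.5) * 24. Assumes latents lie in [-12, 12].
latent = np.random.uniform(-12.0, 12.0, size=(4, 32, 32)).astype(np.float32)

quantized = np.clip((latent / 24.0 + 0.5) * 255.0, 0, 255).round().astype(np.uint8)
restored = (quantized.astype(np.float32) / 255.0 - 0.5) * 24.0

# Worst-case error is half a quantization step: 24 / 255 / 2 ≈ 0.047.
print(np.abs(latent - restored).max())
```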

# How to use?

First, download this:

```bash
huggingface-cli download --repo-type dataset cloneofsimo/imagenet.int8 --local-dir ./vae_mds
```

Then, you need to install [streaming dataset](https://github.com/mosaicml/streaming) to use this; the dataset is in MDS format.

```bash
pip install mosaicml-streaming
```

Then, you can use this dataset like this:

```python
from typing import Any

import numpy as np
import torch
from streaming import StreamingDataset
from streaming.base.format.mds.encodings import Encoding, _encodings
from diffusers.models import AutoencoderKL
from diffusers.image_processor import VaeImageProcessor


# Custom encoding: latents are stored as uint8 and dequantized back to
# float32 in roughly [-12, 12] on read.
class uint8(Encoding):
    def encode(self, obj: Any) -> bytes:
        return obj.tobytes()

    def decode(self, data: bytes) -> Any:
        x = np.frombuffer(data, np.uint8).astype(np.float32)
        return (x / 255.0 - 0.5) * 24.0


_encodings["uint8"] = uint8

remote_train_dir = "./vae_mds"  # the path you downloaded this dataset to
local_train_dir = "./local_train_dir"

train_dataset = StreamingDataset(
    local=local_train_dir,
    remote=remote_train_dir,
    split=None,
    shuffle=True,
    shuffle_algo="naive",
    num_canonical_nodes=1,
    batch_size=32,
)

train_dataloader = torch.utils.data.DataLoader(
    train_dataset,
    batch_size=32,
    num_workers=3,
)


# Example usage: let's see if we can get the 5th image. BTW shuffle plz.

vae = AutoencoderKL.from_pretrained("stabilityai/sdxl-vae").to("cuda:0")

batch = next(iter(train_dataloader))

i = 5
vae_latent = batch["vae_output"].reshape(-1, 4, 32, 32)[i:i+1].cuda().float()
idx = batch["label"][i]
text_label = batch["label_as_text"][i]

print(f"idx: {idx}, text_label: {text_label}, latent: {vae_latent.shape}")
# idx: 402, text_label: acoustic guitar, latent: torch.Size([1, 4, 32, 32])

# Example decoding back to pixels with the SDXL VAE.
x = vae.decode(vae_latent).sample
img = VaeImageProcessor().postprocess(image=x.detach(), do_denormalize=[True])[0]
img.save("5th_image.png")
```
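
Since the latents are already VAE-encoded (and dequantized on read by the `uint8` encoding above), a training step never needs to touch pixels or the VAE encoder. As a rough sketch, an inner loop might look like the following; the model, objective, and optimizer here are hypothetical stand-ins for your own.

```python
import torch
import torch.nn.functional as F

# Stand-ins so the sketch runs; swap in your real diffusion model and objective.
model = torch.nn.Conv2d(4, 4, kernel_size=3, padding=1).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

for batch in train_dataloader:
    latents = batch["vae_output"].reshape(-1, 4, 32, 32).cuda().float()
    labels = batch["label"].cuda()  # class conditioning, if your model uses it

    loss = F.mse_loss(model(latents), latents)  # placeholder objective
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```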