Update README.md
README.md
[ArtBench](https://github.com/liaopeiyuan/artbench) samples encoded to float16 SDXL latents via the [Ollin VAE](https://huggingface.co/madebyollin/sdxl-vae-fp16-fix).

Dataset created using [this script](https://github.com/Birch-san/sdxl-diffusion-decoder/blob/main/script/make_sdxl_latent_dataset.py).

The mean & logvar are not saved: the latent distribution's variance is low enough that retaining them isn't worth doubling the file size. Instead, each latent was sampled once from the diagonal Gaussian distribution and the result saved. The original image is also kept.
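The sampling step above can be sketched as follows. This is a minimal illustration, not the dataset script itself: the function name and the mean/logvar values are made up, and the shape assumes SDXL's 4-channel 32×32 latents.

```python
import torch

# Sketch of sampling from a diagonal Gaussian posterior (the VAE encoder's
# output): latent = mean + std * eps, with std = exp(0.5 * logvar).
# Names and values here are illustrative, not the dataset script's API.
def sample_latent(mean: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    std = torch.exp(0.5 * logvar)
    return mean + std * torch.randn_like(mean)

mean = torch.zeros(4, 32, 32)            # 4-channel SDXL latent
logvar = torch.full((4, 32, 32), -8.0)   # low variance, as described above
latent = sample_latent(mean, logvar)
```

With variance this low, the sampled latent stays close to the mean, which is why discarding mean & logvar loses little information.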
Schema/usage:

```python
from typing import Iterator, TypedDict

from webdataset import WebDataset

Sample = TypedDict('Sample', {
    '__key__': str,
    '__url__': str,
    'cls.txt': bytes,     # UTF-8 encoded class id from 0 to 9 inclusive
    'img.png': bytes,     # PIL image, serialized. 256×256 px
    'latent.pth': bytes,  # FloatTensor, serialized. 32×32 latents
})

it: Iterator[Sample] = WebDataset('train/{00000..00004}.tar')
```
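A minimal sketch of decoding one sample's fields, assuming the schema above: `cls.txt` is a UTF-8 digit string and `latent.pth` loads back into a tensor with `torch.load`. To stay self-contained, the snippet fabricates one sample's bytes rather than reading a real shard.

```python
import io

import torch

# Simulate the serialized fields of one sample (in practice these come
# from iterating the WebDataset shards as shown above).
latent = torch.randn(4, 32, 32)
buf = io.BytesIO()
torch.save(latent, buf)
sample = {'cls.txt': b'3', 'latent.pth': buf.getvalue()}

# Deserialize: class id is a UTF-8 digit '0'..'9'; latent is a FloatTensor.
cls_id = int(sample['cls.txt'].decode('utf-8'))
restored = torch.load(io.BytesIO(sample['latent.pth']))

assert 0 <= cls_id <= 9
assert torch.equal(restored, latent)
```

The `img.png` bytes can be decoded analogously with `PIL.Image.open(io.BytesIO(sample['img.png']))`; it is omitted here to keep the dependencies minimal.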