johnowhitaker committed
Commit aedd297
1 Parent(s): aad5ece

Update README.md

Files changed (1): README.md (+30 -1)
---
# Dataset Card for "latent_lsun_church_256px"

This dataset is derived from https://huggingface.co/datasets/tglcourse/lsun_church_train

Each image is cropped to a 256px square and encoded to a 32x32x4 latent representation using the same VAE as that employed by Stable Diffusion.
Decoding:
```python
from diffusers import AutoencoderKL
from datasets import load_dataset
from PIL import Image
import numpy as np
import torch

# Load the dataset
dataset = load_dataset('tglcourse/latent_lsun_church_256px')

# Load the VAE (requires access - see that repo's model card for info)
vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae")

latent = torch.tensor([dataset['train'][0]['latent']])  # To tensor (bs, 4, 32, 32)
latent = (1 / 0.18215) * latent  # Scale to match SD implementation
with torch.no_grad():
    image = vae.decode(latent).sample[0]  # Decode
image = (image / 2 + 0.5).clamp(0, 1)  # To (0, 1)
image = image.detach().cpu().permute(1, 2, 0).numpy()  # To numpy, channels last
image = (image * 255).round().astype("uint8")  # To (0, 255) and type uint8
image = Image.fromarray(image)  # The resulting PIL image
```