Dataset preview: each example has two columns, label (a class label with 3 classes) and latent (a nested sequence of floats holding the 4x64x64 latent).

Dataset Card for "latent_afhqv2_512px"

Each image is cropped to a 512px square and encoded to a 4x64x64 latent representation using the same VAE as the one employed by Stable Diffusion.
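For context, the latents can be reproduced with roughly the following sketch. This is a minimal example, not the exact preprocessing script used to build the dataset: the plain resize to 512px and the use of latent_dist.sample() (rather than the distribution mean) are assumptions, and the 0.18215 factor matches the scaling undone in the decoding example below.

from diffusers import AutoencoderKL
from PIL import Image
import numpy as np
import torch
# Load the same VAE used by Stable Diffusion (requires access - see repo model card for info)
vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae")
# Prepare a 512px square RGB image in the (-1, 1) range expected by the VAE
img = Image.open("example.jpg").convert("RGB").resize((512, 512))  # assumed resize; the original crop may differ
x = torch.tensor(np.array(img), dtype=torch.float32).permute(2, 0, 1).unsqueeze(0) / 255.0  # (1, 3, 512, 512) in (0, 1)
x = x * 2 - 1  # To (-1, 1)
with torch.no_grad():
    latent = vae.encode(x).latent_dist.sample()  # (1, 4, 64, 64)
latent = 0.18215 * latent  # Scale to match the SD convention (undone before decoding)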

Decoding

from diffusers import AutoencoderKL
from datasets import load_dataset
from PIL import Image
import numpy as np
import torch
# Load the dataset
dataset = load_dataset('tglcourse/latent_afhqv2_512px')
# Load the VAE (requires access - see repo model card for info)
vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae")
latent = torch.tensor([dataset['train'][0]['latent']]) # To tensor (bs, 4, 64, 64)
latent = (1 / 0.18215) * latent # Scale to match SD implementation
with torch.no_grad():
    image = vae.decode(latent).sample[0] # Decode 
image = (image / 2 + 0.5).clamp(0, 1) # To (0, 1)
image = image.detach().cpu().permute(1, 2, 0).numpy() # To numpy, channels last
image = (image * 255).round().astype("uint8") # (0, 255) and type uint8
image = Image.fromarray(image) # To PIL
image # The resulting PIL image
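Each latent is stored alongside a class label (the dataset preview shows a 3-class label column, as in AFHQ v2). A minimal sketch for reading the label name of an example, assuming the label column is a standard datasets ClassLabel feature:

example = dataset['train'][0]
# Map the integer label to its class name (e.g. "dog"); names come from the dataset's ClassLabel feature
label_name = dataset['train'].features['label'].int2str(example['label'])
print(label_name)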