---
license: mit
tags:
  - stable-diffusion
  - stable-diffusion-diffusers
inference: false
---

# SDXL-VAE-FP16-Fix

SDXL-VAE-FP16-Fix is the SDXL VAE, but modified to run in fp16 precision without generating NaNs.
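The NaNs come from plain float16 overflow: float16 can only represent magnitudes up to roughly 65504, and the original VAE's internal activations can exceed that, saturating to `inf` and turning into `nan` in subsequent arithmetic. A minimal NumPy illustration of the failure mode (not taken from the model itself):

```python
import numpy as np

# float16 saturates to inf above ~65504; once an inf appears,
# common follow-up operations (e.g. subtraction in normalization)
# produce nan.
big = np.float32(70000.0)   # fine in float32
half = np.float16(big)      # overflows float16
print(half)                 # inf
print(half - half)          # nan (inf - inf)
```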

To compare the two VAEs, decode the same latents with each one in both precisions:

```python
import torch
from diffusers import DiffusionPipeline, AutoencoderKL

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-0.9",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")
fixed_vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix").half().to("cuda")

prompt = "An astronaut riding a green horse"
latents = pipe(prompt=prompt, output_type="latent").images

# Decode the same latents with each VAE in float32 and float16.
# `display` assumes an IPython/Jupyter environment.
for vae in (pipe.vae, fixed_vae):
    for dtype in (torch.float32, torch.float16):
        with torch.no_grad(), torch.cuda.amp.autocast(dtype=torch.float16, enabled=(dtype == torch.float16)):
            print(dtype, "sdxl-vae" if vae is pipe.vae else "sdxl-vae-fp16-fix")
            display(pipe.image_processor.postprocess(vae.decode(latents / vae.config.scaling_factor).sample)[0])
```
| VAE | Decoding in float32 precision | Decoding in float16 precision |
| --- | --- | --- |
| SDXL-VAE | (decoded image) | ⚠️ (NaNs) |
| SDXL-VAE-FP16-Fix | (decoded image) | (decoded image) |
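The general idea behind making such a network fp16-safe is to rescale values so intermediate results stay within float16's representable range, then undo the scaling in higher precision. A NumPy sketch of that idea (illustrative only; the `scale` factor is hypothetical and this is not the exact adjustment applied to this model's weights):

```python
import numpy as np

x = np.float32(70000.0)     # magnitude beyond float16 max (~65504)
scale = np.float32(0.125)   # hypothetical downscale factor

naive = np.float16(x)                             # overflows to inf
safe = np.float32(np.float16(x * scale)) / scale  # finite, small rounding error

print(naive)  # inf
print(safe)   # 70016.0
```

The scaled path trades a little precision (70016 vs. 70000) for staying finite, which is the trade-off a fix like this accepts.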