---
license: mit
---
# Diffusers API of Transparent Image Layer Diffusion using Latent Transparency
Create transparent images with Diffusers!

![corgi](result_sdxl.png)

Please check the GitHub repo [here](https://github.com/rootonchair/diffuser_layerdiffuse).

This is a port of the original [SD WebUI Layer Diffusion extension](https://github.com/layerdiffusion/sd-forge-layerdiffuse) to Diffusers, extending the ability to generate transparent images to your favorite API.

Paper: [Transparent Image Layer Diffusion using Latent Transparency](https://arxiv.org/abs/2402.17113)
## Quickstart

Generate a transparent image with SD1.5 models. In this example, we will use [digiplay/Juggernaut_final](https://huggingface.co/digiplay/Juggernaut_final) as the base model. The `TransparentVAEDecoder` and `load_lora_to_unet` helpers used below come from the GitHub repository linked above.
```python
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
import torch
from diffusers import StableDiffusionPipeline

# These modules are provided by the diffuser_layerdiffuse GitHub repo linked above
from models import TransparentVAEDecoder
from loaders import load_lora_to_unet

if __name__ == "__main__":
    # Download the transparent VAE decoder weights and wrap the base model's VAE with them
    model_path = hf_hub_download(
        'LayerDiffusion/layerdiffusion-v1',
        'layer_sd15_vae_transparent_decoder.safetensors',
    )
    vae_transparent_decoder = TransparentVAEDecoder.from_pretrained("digiplay/Juggernaut_final", subfolder="vae", torch_dtype=torch.float16).to("cuda")
    vae_transparent_decoder.set_transparent_decoder(load_file(model_path))

    pipeline = StableDiffusionPipeline.from_pretrained("digiplay/Juggernaut_final", vae=vae_transparent_decoder, torch_dtype=torch.float16, safety_checker=None).to("cuda")

    # Download the transparent attention weights and inject them into the UNet
    model_path = hf_hub_download(
        'LayerDiffusion/layerdiffusion-v1',
        'layer_sd15_transparent_attn.safetensors'
    )
    load_lora_to_unet(pipeline.unet, model_path, frames=1)

    # With return_dict=False, the first element of the output is a list of PIL images
    image = pipeline(prompt="a dog sitting in room, high quality",
                     width=512, height=512,
                     num_images_per_prompt=1, return_dict=False)[0]
```
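
The pipeline call above returns a list of PIL images whose alpha channel carries the transparency. A minimal sketch of saving the first result (the filename is arbitrary; PNG is used because it preserves the alpha channel):

```python
# Continues from the script above: save the first generated image as PNG
# so the transparent background is preserved.
image[0].save("transparent_dog.png")
```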