# Model Card for Multi-CD (Stable Diffusion 1.5)

Nice generations using 4 inference steps!


## Model Details

### Model Description

Based on Stable Diffusion 1.5 with a UNet fine-tuned using Multi Consistency Distillation (Multi-CD) on the COCO dataset.
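
This card does not spell out the Multi-CD objective; for orientation, a generic consistency-distillation loss (Song et al., 2023) trains the student to produce matching outputs at adjacent points of the same sampling trajectory:

$$
\mathcal{L}(\theta)=\mathbb{E}\left[\,d\big(f_\theta(x_{t_{n+1}},t_{n+1}),\,f_{\theta^-}(\hat{x}_{t_n},t_n)\big)\right]
$$

where \\(\hat{x}_{t_n}\\) is one teacher ODE-solver step from \\(x_{t_{n+1}}\\), \\(\theta^-\\) is an EMA copy of \\(\theta\\), and \\(d\\) is a distance such as LPIPS. The actual Multi-CD formulation may differ.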

## Inference

```bash
pip install diffusers==0.30.2 peft==0.8.2 huggingface_hub==0.23.4
```

```python
import torch
from diffusers import StableDiffusionPipeline, UNet2DConditionModel, DDIMScheduler
from peft import PeftModel
```

### Base Stable Diffusion 1.5


```python
model_id = "sd-legacy/stable-diffusion-v1-5"

pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Check that all model components are in FP16 and on CUDA
assert pipe.unet.dtype == torch.float16 and pipe.unet.device.type == 'cuda'
assert pipe.vae.dtype == torch.float16 and pipe.vae.device.type == 'cuda'
assert pipe.text_encoder.dtype == torch.float16 and pipe.text_encoder.device.type == 'cuda'

# Replace the default sampler with DDIM
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing")
pipe.scheduler.timesteps = pipe.scheduler.timesteps.cuda()
pipe.scheduler.alphas_cumprod = pipe.scheduler.alphas_cumprod.cuda()
```
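
With `timestep_spacing="trailing"` and 4 steps, DDIM samples the last timestep of each quarter of the 1000-step training schedule. A quick sanity check (our addition, not part of the original card):

```python
# With "trailing" spacing the 4-step schedule should be [999, 749, 499, 249]
pipe.scheduler.set_timesteps(4, device="cuda")
print(pipe.scheduler.timesteps)
```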

### Default 4 steps

```python
generator = torch.Generator(device="cuda").manual_seed(1)

images = pipe(
    prompt="A sad puppy with large eyes",
    num_inference_steps=4,
    generator=generator,
    num_images_per_prompt=1,
    guidance_scale=1
).images

images[0]  # the first generated image (renders inline in a notebook)
```
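
To keep the baseline output for a side-by-side comparison later (the variable and file names here are ours, not part of the original card):

```python
base_image = images[0]
base_image.save("sd15_base_4steps.png")  # optional: write to disk
```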

### Multi-CD UNet


```python
# Load the base UNet in FP32; the PEFT adapter is applied on top of it
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet")
unet.to('cuda')
assert unet.dtype == torch.float32

# Wrap the UNet with the Multi-CD adapter from the Hub
cm_unet = PeftModel.from_pretrained(
    unet,
    "jmpleo/cv-week-2024",
    subfolder='multi-cd',
    adapter_name="multi-cd",
)
```
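
Since the adapter is loaded through PEFT, it can optionally be folded into the UNet weights for slightly faster inference. A sketch using the standard PEFT call, assuming the adapter is a LoRA; note that merging is destructive (you cannot switch back to the base weights without reloading):

```python
# Optional: merge the adapter weights into the UNet and drop the PEFT wrapper
merged_unet = cm_unet.merge_and_unload()
```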

### Multi-CD 4-step inference

```python
# Swap the adapted UNet into the pipeline (cast to FP16 to match the other components)
pipe.unet = cm_unet.eval().to(torch.float16)
assert cm_unet.active_adapter == 'multi-cd'

generator = torch.Generator(device="cuda").manual_seed(1)

images = pipe(
    prompt="A sad puppy with large eyes",
    num_inference_steps=4,
    generator=generator,
    num_images_per_prompt=1,
    guidance_scale=1
).images

images[0]  # the first generated image (renders inline in a notebook)
```
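
For a quick visual comparison of the two 4-step results, `diffusers` ships a small grid helper; this sketch assumes you kept `base_image` from the baseline run above:

```python
from diffusers.utils import make_image_grid

# Left: base SD 1.5 at 4 steps; right: Multi-CD at 4 steps
grid = make_image_grid([base_image, images[0]], rows=1, cols=2)
grid.save("base_vs_multicd_4steps.png")
```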