---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
base_model: stabilityai/stable-diffusion-xl-base-1.0
tags:
  - safetensors
  - stable-diffusion
  - sdxl
  - ssd-1b
  - flash
  - sdxl-flash
  - sdxl-flash-mini
  - distilled
  - lightning
  - turbo
  - lcm
  - hyper
  - fast
  - fast-sdxl
  - sd-community
inference: false
---

# SDXL Flash Mini, in collaboration with Project Fluently


Introducing SDXL Flash (Mini), our new fast model. We found that existing fast XL models gain speed at the cost of quality, so we trained a fast model of our own: it is not as fast as LCM, Turbo, Lightning, or Hyper, but its quality is higher. Below you will find our study of steps and CFG.

The Mini variant weighs less and consumes less video memory and other resources, while the quality drops only slightly.

## Steps and CFG (Guidance)

*(image: grid test across steps and CFG values)*

## Optimal settings

- Steps: 6-9
- CFG Scale: 2.5-3.5
- Sampler: DPM++ SDE

## Diffusers usage

```bash
pip install torch diffusers
```

```python
import torch
from diffusers import StableDiffusionXLPipeline, DPMSolverSinglestepScheduler

# Load the model in fp16.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "sd-community/sdxl-flash-mini", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# Ensure the sampler uses "trailing" timesteps.
pipe.scheduler = DPMSolverSinglestepScheduler.from_config(
    pipe.scheduler.config, timestep_spacing="trailing"
)

# Generate an image with the recommended settings.
pipe(
    "a happy dog, sunny day, realism", num_inference_steps=7, guidance_scale=3
).images[0].save("output.png")
```