---
license: openrail++
library_name: diffusers
tags:
- lora
- text-to-image
- stable-diffusion
---
# Hyper-SD

Official Repository of the paper: *[Hyper-SD](https://arxiv.org/abs/2404.13686)*.

Project Page: https://hyper-sd.github.io/

![](./hypersd_tearser.jpg)

## Try our Hugging Face demos:

Hyper-SD Scribble demo hosted on [🤗 scribble](https://huggingface.co/spaces/ByteDance/Hyper-SD15-Scribble)

Hyper-SDXL One-step Text-to-Image demo hosted on [🤗 T2I](https://huggingface.co/spaces/ByteDance/Hyper-SDXL-1Step-T2I)

## Introduction

Hyper-SD is one of the latest state-of-the-art diffusion model acceleration techniques. In this repository, we release the models distilled from [SDXL Base 1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) and [Stable-Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5).

## Checkpoints

* `Hyper-SDXL-Nstep-lora.safetensors`: LoRA checkpoint, for SDXL-related models.
* `Hyper-SD15-Nstep-lora.safetensors`: LoRA checkpoint, for SD1.5-related models.
* `Hyper-SDXL-1step-unet.safetensors`: Unet checkpoint distilled from SDXL-Base.

## SDXL-related models Usage

### 2-Steps, 4-Steps, 8-Steps LoRA

```python
import torch
from diffusers import DiffusionPipeline, DDIMScheduler
from huggingface_hub import hf_hub_download

base_model_id = "stabilityai/stable-diffusion-xl-base-1.0"
repo_name = "ByteDance/Hyper-SD"
# Take the 2-steps LoRA as an example
ckpt_name = "Hyper-SDXL-2steps-lora.safetensors"
# Load model.
pipe = DiffusionPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16, variant="fp16").to("cuda")
pipe.load_lora_weights(hf_hub_download(repo_name, ckpt_name))
pipe.fuse_lora()
# Ensure the DDIM scheduler timestep spacing is set to "trailing"
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing")
prompt = "a photo of a cat"
image = pipe(prompt=prompt, num_inference_steps=2, guidance_scale=0).images[0]
```

### Unified LoRA

```python
import torch
from diffusers import DiffusionPipeline, TCDScheduler
from huggingface_hub import hf_hub_download

base_model_id = "stabilityai/stable-diffusion-xl-base-1.0"
repo_name = "ByteDance/Hyper-SD"
ckpt_name = "Hyper-SDXL-1step-lora.safetensors"
# Load model.
pipe = DiffusionPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16, variant="fp16").to("cuda")
pipe.load_lora_weights(hf_hub_download(repo_name, ckpt_name))
pipe.fuse_lora()
# Use the TCD scheduler to achieve better image quality
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)
# Lower eta results in more detail
eta = 1.0
prompt = "a photo of a cat"
image = pipe(prompt=prompt, num_inference_steps=1, guidance_scale=0, eta=eta).images[0]
```
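Since the unified LoRA exposes the TCD scheduler's `eta` parameter (lower eta yields more detail, as noted in the comment above), it can help to compare a few values side by side. Below is a minimal sketch that reuses the fused `pipe` from the snippet above; the specific eta values and output filenames are illustrative choices, not recommendations from the paper:

```python
# Minimal sketch: sweep a few illustrative eta values with the fused
# unified-LoRA pipeline built above. Lower eta tends to yield more detail.
prompt = "a photo of a cat"
for eta in (1.0, 0.8, 0.5):  # illustrative values, not from the paper
    image = pipe(prompt=prompt, num_inference_steps=1, guidance_scale=0, eta=eta).images[0]
    image.save(f"cat_eta_{eta:.1f}.png")  # hypothetical output path
```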
### 1-step SDXL Unet

```python
import torch
from diffusers import DiffusionPipeline, UNet2DConditionModel, LCMScheduler
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

base_model_id = "stabilityai/stable-diffusion-xl-base-1.0"
repo_name = "ByteDance/Hyper-SD"
ckpt_name = "Hyper-SDXL-1step-Unet.safetensors"
# Load model.
unet = UNet2DConditionModel.from_config(base_model_id, subfolder="unet").to("cuda", torch.float16)
unet.load_state_dict(load_file(hf_hub_download(repo_name, ckpt_name), device="cuda"))
pipe = DiffusionPipeline.from_pretrained(base_model_id, unet=unet, torch_dtype=torch.float16, variant="fp16").to("cuda")
# Use the LCM scheduler instead of the DDIM scheduler to support specific timestep inputs
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
# Set the start timestep to 800 in one-step inference to get better results
prompt = "a photo of a cat"
image = pipe(prompt=prompt, num_inference_steps=1, guidance_scale=0, timesteps=[800]).images[0]
```

## SD1.5-related models Usage

### 2-Steps, 4-Steps, 8-Steps LoRA

```python
import torch
from diffusers import DiffusionPipeline, DDIMScheduler
from huggingface_hub import hf_hub_download

base_model_id = "runwayml/stable-diffusion-v1-5"
repo_name = "ByteDance/Hyper-SD"
# Take the 2-steps LoRA as an example
ckpt_name = "Hyper-SD15-2steps-lora.safetensors"
# Load model.
pipe = DiffusionPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16, variant="fp16").to("cuda")
pipe.load_lora_weights(hf_hub_download(repo_name, ckpt_name))
pipe.fuse_lora()
# Ensure the DDIM scheduler timestep spacing is set to "trailing"
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing")
prompt = "a photo of a cat"
image = pipe(prompt=prompt, num_inference_steps=2, guidance_scale=0).images[0]
```

### Unified LoRA

```python
import torch
from diffusers import DiffusionPipeline, TCDScheduler
from huggingface_hub import hf_hub_download

base_model_id = "runwayml/stable-diffusion-v1-5"
repo_name = "ByteDance/Hyper-SD"
ckpt_name = "Hyper-SD15-1step-lora.safetensors"
# Load model.
pipe = DiffusionPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16, variant="fp16").to("cuda")
pipe.load_lora_weights(hf_hub_download(repo_name, ckpt_name))
pipe.fuse_lora()
# Use the TCD scheduler to achieve better image quality
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)
# Lower eta results in more detail
eta = 1.0
prompt = "a photo of a cat"
image = pipe(prompt=prompt, num_inference_steps=1, guidance_scale=0, eta=eta).images[0]
```

## Citation

```bibtex
@article{ren2024hypersd,
  title={Hyper-SD: Trajectory Segmented Consistency Model for Efficient Image Synthesis},
  author={Ren, Yuxi and Xia, Xin and Lu, Yanzuo and Zhang, Jiacheng and Wu, Jie and Xie, Pan and Wang, Xing and Xiao, Xuefeng},
  year={2024},
  journal={arXiv preprint arXiv:2404.13686},
}
```