---
library_name: diffusers
tags:
- text-to-image
license: apache-2.0
inference: false
---

# Sub-path Linear Approximation Model (SLAM): SD1.5

Paper: [https://arxiv.org/abs/2404.13903](https://arxiv.org/abs/2404.13903)
Project Page: [https://subpath-linear-approx-model.github.io/](https://subpath-linear-approx-model.github.io/)
This checkpoint is distilled from [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) with our proposed Sub-path Linear Approximation Model (SLAM), which reduces the number of inference steps to only 2-4.

## Usage

First, install the latest version of the Diffusers library, as well as peft, accelerate, and transformers.

```bash
pip install --upgrade pip
pip install --upgrade diffusers transformers accelerate peft
```

We implement SLAM to be compatible with [LCMScheduler](https://huggingface.co/docs/diffusers/v0.22.3/en/api/schedulers/lcm#diffusers.LCMScheduler). You can use SLAM just like you use LCM, keeping `guidance_scale` fixed at 1.

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("alimama-creative/slam-sd1.5")

# To save GPU memory, torch.float16 can be used, but it may compromise image quality.
pipe.to(torch_device="cuda", torch_dtype=torch.float16)

prompt = "a painting of a majestic kingdom with towering castles, lush gardens, ice and snow world"

num_inference_steps = 2

images = pipe(
    prompt=prompt,
    num_inference_steps=num_inference_steps,
    guidance_scale=1,
    lcm_origin_steps=50,
    output_type="pil",
).images
```

![castle2_slam_step4.png](https://intranetproxy.alipay.com/skylark/lark/0/2024/png/102756509/1714305791356-5ba636a5-8435-4c90-84f3-f06163ebab51.png)
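If you prefer to configure the scheduler explicitly and save the outputs to disk, the following sketch does the same thing as the snippet above. It assumes the repository's scheduler config is LCM-compatible (so `LCMScheduler.from_config` works); the fixed seed, the loop over the 2-4 step range, and the output filenames are illustrative choices, not part of the original example.

```python
import torch
from diffusers import DiffusionPipeline, LCMScheduler

# Load the distilled checkpoint directly in half precision.
pipe = DiffusionPipeline.from_pretrained(
    "alimama-creative/slam-sd1.5", torch_dtype=torch.float16
)

# Assumption: the repository ships an LCM-style scheduler config,
# so LCMScheduler can be rebuilt from it explicitly.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")

prompt = "a painting of a majestic kingdom with towering castles, lush gardens, ice and snow world"

# A fixed generator is only for reproducibility; any seed works.
generator = torch.Generator(device="cuda").manual_seed(0)

# Try the 2-4 step range mentioned above, keeping guidance_scale at 1.
for steps in (2, 4):
    image = pipe(
        prompt=prompt,
        num_inference_steps=steps,
        guidance_scale=1,
        lcm_origin_steps=50,
        generator=generator,
        output_type="pil",
    ).images[0]
    image.save(f"kingdom_{steps}steps.png")
```

Rebuilding the scheduler from the pipeline's own config keeps all noise-schedule settings consistent with the checkpoint while making the LCM-style sampling explicit in the code.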