patrickvonplaten committed
Commit: ff2292f (1 parent: 0fa535e)

Update README.md

Files changed (1): README.md (+2 −3)
README.md CHANGED
README.md CHANGED
@@ -2,7 +2,6 @@
 library_name: diffusers
 base_model: stabilityai/stable-diffusion-xl-base-1.0
 tags:
-- lora
 - text-to-image
 license: openrail++
 inference: false
@@ -31,7 +30,7 @@ pip install --upgrade diffusers transformers accelerate peft
 
 ### Text-to-Image
 
-The adapter can be loaded with its base model `stabilityai/stable-diffusion-xl-base-1.0`. Next, the scheduler needs to be changed to [`LCMScheduler`](https://huggingface.co/docs/diffusers/v0.22.3/en/api/schedulers/lcm#diffusers.LCMScheduler) and we can reduce the number of inference steps to just 2 to 8.
+The model can be loaded with its base pipeline `stabilityai/stable-diffusion-xl-base-1.0`. Next, the scheduler needs to be changed to [`LCMScheduler`](https://huggingface.co/docs/diffusers/v0.22.3/en/api/schedulers/lcm#diffusers.LCMScheduler) and we can reduce the number of inference steps to just 2 to 8.
 Please make sure to either disable `guidance_scale` or use values between 1.0 and 2.0.
 
 ```python
@@ -41,7 +40,7 @@ unet = UNet2DConditionModel.from_pretrained("latent-consistency/lcm-sdxl", torch
 pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", unet=unet, torch_dtype=torch.float16, variant="fp16")
 
 pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
-pipe.set_progress_bar_config(disable=None)
+pipe.to("cuda")
 
 prompt = "a red car standing on the side of the street"