tianweiy committed
Commit
a70a18b
2 Parent(s): c37e3bf baf3ffd

Merge branch 'main' of https://huggingface.co/tianweiy/DMD2 into main

Files changed (1): README.md (+5 -3)
README.md CHANGED
@@ -22,9 +22,9 @@ Tianwei Yin [tianweiy@mit.edu](mailto:tianweiy@mit.edu)
 
 ## Huggingface Demo
 
-Our 4-step (much higher quality, 2X slower) Text-to-Image demo is hosted at [DMD2-4step](https://913f7051c61c070e4e.gradio.live)
+Our 4-step (much higher quality, 2X slower) Text-to-Image demo is hosted at [DMD2-4step](https://6cf215173601f32482.gradio.live)
 
-Our 1-step Text-to-Image demo is hosted at [DMD2-1step](https://154dfe6ee5c63946cc.gradio.live)
+Our 1-step Text-to-Image demo is hosted at [DMD2-1step](https://cc2622c0c132346c64.gradio.live)
 
 ## Usage
 
@@ -46,7 +46,9 @@ unet.load_state_dict(torch.load(hf_hub_download(repo_name, ckpt_name), map_locat
 pipe = DiffusionPipeline.from_pretrained(base_model_id, unet=unet, torch_dtype=torch.float16, variant="fp16").to("cuda")
 pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
 prompt="a photo of a cat"
-image=pipe(prompt=prompt, num_inference_steps=4, guidance_scale=0).images[0]
+
+# LCMScheduler's default timesteps are different from the one we used for training
+image=pipe(prompt=prompt, num_inference_steps=4, guidance_scale=0, timesteps=[999, 749, 499, 249]).images[0]
 ```
 
 #### 1-step generation
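The added comment explains why the commit passes `timesteps=[999, 749, 499, 249]` explicitly: `LCMScheduler`'s default spacing differs from the schedule DMD2 was trained on. A minimal sketch of how that schedule can be derived, assuming the 1000 training timesteps are split into evenly spaced segments and the last index of each segment is sampled (the helper name `dmd2_timesteps` is hypothetical, not part of the repository):

```python
def dmd2_timesteps(num_train_timesteps=1000, num_inference_steps=4):
    """Derive evenly spaced sampling timesteps, counting down from the
    last training timestep (num_train_timesteps - 1)."""
    step = num_train_timesteps // num_inference_steps
    return [num_train_timesteps - 1 - i * step for i in range(num_inference_steps)]

print(dmd2_timesteps())  # [999, 749, 499, 249]
```

With `num_inference_steps=1` the same rule yields `[999]`, matching the idea that 1-step generation denoises from the final training timestep.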