h1t committed on
Commit 449a99f
1 Parent(s): f50c207

Upload folder using huggingface_hub

README.md ADDED
@@ -0,0 +1,42 @@
+ ---
+ tags:
+ - text-to-image
+ - stable-diffusion
+ - lora
+ - diffusers
+ base_model: runwayml/stable-diffusion-v1-5
+ license: mit
+ library_name: diffusers
+ ---
+ # Model description
+
+ Official TCD LoRA for Stable Diffusion v1.5 from the paper [Trajectory Consistency Distillation](https://arxiv.org/abs/2402.19159).
+
+ For more usage examples, please see the [Project Page](https://mhh0318.github.io/tcd/).
+
+ Here is a simple example:
+ ```python
+ import torch
+ from diffusers import StableDiffusionPipeline, TCDScheduler
+
+ device = "cuda"
+ base_model_id = "runwayml/stable-diffusion-v1-5"
+ tcd_lora_id = "h1t/TCD-SD15-LoRA"
+
+ # Load the base Stable Diffusion v1.5 pipeline in fp16.
+ pipe = StableDiffusionPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16, variant="fp16").to(device)
+
+ # Swap in the TCD scheduler, then load and fuse the TCD LoRA weights.
+ pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)
+ pipe.load_lora_weights(tcd_lora_id)
+ pipe.fuse_lora()
+
+ prompt = "Beautiful woman, bubblegum pink, lemon yellow, minty blue, futuristic, high-detail, epic composition, watercolor."
+
+ image = pipe(
+     prompt=prompt,
+     num_inference_steps=4,
+     guidance_scale=0,
+     # Eta (referred to as `gamma` in the paper) is used to control the stochasticity in every step.
+     # A value of 0.3 often yields good results.
+     # We recommend using a higher eta when increasing the number of inference steps.
+     eta=0.3,
+     generator=torch.Generator(device=device).manual_seed(42),
+ ).images[0]
+ ```
+
+ ![](assets/result.png)
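+
+ The comments above recommend raising `eta` when using more inference steps. As a minimal, hedged sketch of that trade-off (the values `num_inference_steps=8` and `eta=0.5` are illustrative assumptions, not settings from the paper), reusing the `pipe`, `prompt`, and `device` from the example above:
+
+ ```python
+ # Illustrative only: more inference steps paired with a correspondingly higher eta (assumed values).
+ image_8_steps = pipe(
+     prompt=prompt,
+     num_inference_steps=8,
+     guidance_scale=0,
+     eta=0.5,
+     generator=torch.Generator(device=device).manual_seed(42),
+ ).images[0]
+ image_8_steps.save("result_8_steps.png")
+ ```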
assets/result.png ADDED
pytorch_lora_weights.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eaecb24a1cda4411eab67275b1d991071216ac93693e8fa0c9226c9df0386232
+ size 134621556