Commit 9606b47 by h1t (1 parent: 94ba1b7)

Update README.md

Files changed (1): README.md (+17, −14)
README.md CHANGED
@@ -10,36 +10,31 @@ inference: false
 
 # Trajectory Consistency Distillation
 
- [![Arxiv](https://img.shields.io/badge/arXiv-2402.xxxxx-b31b1b)]()
- [![Project page](https://img.shields.io/badge/Web-Project%20Page-green)](https://mhh0318.github.io/tcd/)
- [![Github](https://img.shields.io/badge/Github-Repo-yellow?logo=github)](https://github.com/jabir-zheng/TCD)
- [![Hugging Face Space](https://img.shields.io/badge/%F0%9F%A4%97HuggingFace-Space-blue)]()
+ Official Model Repo of the paper: [Trajectory Consistency Distillation](https://arxiv.org/abs/2402.19159).
+ For more information, please check the [GitHub Repo](https://github.com/jabir-zheng/TCD) and the [Project Page](https://mhh0318.github.io/tcd/).
 
- Official Repository of the paper: [Trajectory Consistency Distillation]()
+ You are also welcome to try the demo hosted on this [🤗 Space](https://huggingface.co/spaces/h1t/TCD).
 
 ![](./assets/teaser_fig.png)
 
- ## News
- - (🔥New) 2024/2/28 We provided a demo of TCD on 🤗 Hugging Face Space. Try it out [here]().
- - (🔥New) 2024/2/28 We released our model [TCD-SDXL-Lora]() in 🤗 Hugging Face.
- - (🔥New) 2024/2/28 Please refer to the [Usage](#usage-anchor) for more information with Diffusers Pipeline.
-
 ## Introduction
 
 TCD, inspired by [Consistency Models](https://arxiv.org/abs/2303.01469), is a novel distillation technique that transfers knowledge from pre-trained diffusion models into a few-step sampler. In this repository, we release the inference code and our model named TCD-SDXL, which is distilled from [SDXL Base 1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0). We provide the LoRA checkpoint in this repository.
 
+ ![](./assets/teaser.jpeg)
+
 ✨ TCD has the following advantages:
 
- - `High-Quality with Few-Step`: TCD significantly surpasses the previous state-of-the-art few-step text-to-image model [LCM](https://github.com/luosiallen/latent-consistency-model/tree/main) in terms of image quality. Notably, LCM experiences a notable decline in quality at high NFEs. In contrast, _**TCD maintains superior generative quality at high NFEs, even exceeding the performance of DPM-Solver++(2S) with origin SDXL**_.
- ![](./assets/teaser.jpeg)
- <!-- We observed that the images generated with 8 steps by TCD-SDXL are already highly impressive, even outperforming the original SDXL 50-steps generation results. -->
+ - `Flexible NFEs`: Unlike Turbo-style models, TCD allows the number of function evaluations (NFEs) to be varied at will, and unlike LCMs, doing so does not degrade result quality; LCM shows a notable decline in quality at high NFEs.
+ - `Better than Teacher`: TCD maintains superior generative quality at high NFEs, even exceeding the performance of DPM-Solver++(2S) with the original SDXL, and it does so without any additional discriminator or LPIPS supervision during training.
+ - `Freely Change the Detailing`: During inference, the level of detail in the image can be adjusted simply by tuning a single hyperparameter, gamma. This option introduces no additional parameters.
 - `Versatility`: Integrated with LoRA technology, TCD can be directly applied to various models (including custom community models, style LoRAs, ControlNet, and IP-Adapter) that share the same backbone, as demonstrated in the [Usage](#usage-anchor) section.
 ![](./assets/versatility.png)
 - `Avoiding Mode Collapse`: TCD achieves few-step generation without the need for adversarial training, thus circumventing mode collapse caused by the GAN objective.
 In contrast to the concurrent work [SDXL-Lightning](https://huggingface.co/ByteDance/SDXL-Lightning), which relies on Adversarial Diffusion Distillation, TCD can synthesize results that are more realistic and slightly more diverse, without "Janus" artifacts.
 ![](./assets/compare_sdxl_lightning.png)
 
- For more information, please refer to our paper [Trajectory Consistency Distillation]().
+ For more information, please refer to our paper [Trajectory Consistency Distillation](https://arxiv.org/abs/2402.19159).
 
 <a id="usage-anchor"></a>
 
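The Usage section referenced by the anchor above walks through the Diffusers pipeline. For context, here is a minimal few-step sampling sketch, assuming a recent Diffusers release that ships `TCDScheduler` and that this repo's checkpoint id is `h1t/TCD-SDXL-LoRA`; the prompt and output filename are illustrative only.

```python
# Minimal few-step sampling sketch for the TCD LoRA on SDXL.
# Assumptions: diffusers >= 0.27 (ships TCDScheduler), a CUDA device,
# and that this repo's weights are published as "h1t/TCD-SDXL-LoRA".
import torch
from diffusers import StableDiffusionXLPipeline, TCDScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# Swap in the TCD sampler, then fuse the distilled LoRA into the UNet.
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("h1t/TCD-SDXL-LoRA")
pipe.fuse_lora()

# Few-step generation: guidance is distilled away, so CFG stays off.
image = pipe(
    prompt="a photo of a fox in a snowy forest, golden hour",
    num_inference_steps=4,
    guidance_scale=0.0,
    eta=0.3,  # the paper's gamma; see the detail-control bullet above
    generator=torch.Generator("cuda").manual_seed(0),
).images[0]
image.save("tcd_sdxl_4step.png")
```

Consistent with the `Flexible NFEs` claim, `num_inference_steps` can be raised (8, 16, ...) without the quality drop the README attributes to LCMs.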
 
@@ -318,6 +313,14 @@ grid_image = make_image_grid([ref_image, image], rows=1, cols=2)
 
 ## Citation
 ```bibtex
+ @misc{zheng2024trajectory,
+   title={Trajectory Consistency Distillation},
+   author={Jianbin Zheng and Minghui Hu and Zhongyi Fan and Chaoyue Wang and Changxing Ding and Dacheng Tao and Tat-Jen Cham},
+   year={2024},
+   eprint={2402.19159},
+   archivePrefix={arXiv},
+   primaryClass={cs.CV}
+ }
 ```
 
 ## Acknowledgments
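The `Freely Change the Detailing` bullet in the first hunk attributes detail control to the single hyperparameter gamma; in the Diffusers integration this is commonly surfaced through the pipeline's `eta` argument. A small sweep at a fixed step count, reusing the `make_image_grid` helper visible in the second hunk's header, might look like this sketch (continuing from the pipeline built above):

```python
# Sketch: vary gamma (passed as `eta`) at a fixed NFE to compare detail.
# Reuses the `pipe` object from the previous sketch.
import torch
from diffusers.utils import make_image_grid

prompt = "an oil painting of a lighthouse in a storm"
images = []
for gamma in (0.0, 0.3, 1.0):  # larger gamma -> typically softer detail
    image = pipe(
        prompt=prompt,
        num_inference_steps=8,
        guidance_scale=0.0,
        eta=gamma,
        # fixed seed isolates gamma's effect across the three runs
        generator=torch.Generator("cuda").manual_seed(0),
    ).images[0]
    images.append(image)

make_image_grid(images, rows=1, cols=3).save("tcd_gamma_sweep.png")
```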
 
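Finally, the `Versatility` bullet claims the distilled LoRA composes with other adapters that share the SDXL backbone. A hedged sketch using Diffusers' PEFT-backed multi-adapter API, where the style LoRA id is a hypothetical placeholder:

```python
# Sketch: stack the TCD LoRA with a style LoRA on one SDXL backbone.
# Start from a freshly loaded pipeline (no fuse_lora() call); the id
# "someuser/papercut-style-sdxl" is a placeholder, not a real repo.
pipe.load_lora_weights("h1t/TCD-SDXL-LoRA", adapter_name="tcd")
pipe.load_lora_weights("someuser/papercut-style-sdxl", adapter_name="style")
pipe.set_adapters(["tcd", "style"], adapter_weights=[1.0, 0.8])

image = pipe(
    prompt="papercut style, a cute fox in a forest",
    num_inference_steps=4,
    guidance_scale=0.0,
    eta=0.3,
).images[0]
```

The same pattern extends to the ControlNet and IP-Adapter pipelines mentioned in that bullet, since they share the same backbone.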