Update README.md

README.md (changed)
@@ -10,36 +10,31 @@ inference: false
# Trajectory Consistency Distillation
Official Model Repo of the paper: [Trajectory Consistency Distillation](https://arxiv.org/abs/2402.19159).

For more information, please check the [GitHub Repo](https://github.com/jabir-zheng/TCD) and [Project Page](https://mhh0318.github.io/tcd/).

You are also welcome to try the demo hosted on [🤗 Space](https://huggingface.co/spaces/h1t/TCD).

![](./assets/teaser_fig.png)

## Introduction

TCD, inspired by [Consistency Models](https://arxiv.org/abs/2303.01469), is a novel distillation technique that transfers knowledge from pre-trained diffusion models into a few-step sampler. In this repository, we release the inference code and our model named TCD-SDXL, which is distilled from [SDXL Base 1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0). We provide the LoRA checkpoint in this [repository]().

![](./assets/teaser_1.png)
✨TCD has the following advantages:

- `Flexible NFEs`: For TCD, the NFEs (number of function evaluations) can be varied at will (in contrast to Turbo), without adversely affecting the quality of the results (in contrast to LCMs, which experience a notable decline in quality at high NFEs).
- `Better than Teacher`: TCD maintains superior generative quality at high NFEs, even exceeding the performance of DPM-Solver++(2S) with the original SDXL. It is worth noting that no additional discriminator or LPIPS supervision is included during training.
- `Freely Change the Detailing`: During inference, the level of detail in the image can be modified simply by adjusting a single hyperparameter, gamma, without introducing any additional parameters (see the sketch after this list).
- `Versatility`: Integrated with LoRA technology, TCD can be directly applied to various models (including custom community models, styled LoRAs, ControlNet, and IP-Adapter) that share the same backbone, as demonstrated in the [Usage](#usage-anchor) section.

![](./assets/teaser_3.png)

- `Avoiding Mode Collapse`: TCD achieves few-step generation without the need for adversarial training, thus circumventing mode collapse caused by the GAN objective. In contrast to the concurrent work [SDXL-Lightning](https://huggingface.co/ByteDance/SDXL-Lightning), which relies on Adversarial Diffusion Distillation, TCD can synthesize results that are more realistic and slightly more diverse, without the presence of "Janus" artifacts.

![](./assets/0_lightning.png)
For more information, please refer to our paper [Trajectory Consistency Distillation](https://arxiv.org/abs/2402.19159).

<a id="usage-anchor"></a>

@@ -318,6 +313,14 @@ grid_image = make_image_grid([ref_image, image], rows=1, cols=2)
## Citation
```bibtex
@misc{zheng2024trajectory,
  title={Trajectory Consistency Distillation},
  author={Jianbin Zheng and Minghui Hu and Zhongyi Fan and Chaoyue Wang and Changxing Ding and Dacheng Tao and Tat-Jen Cham},
  year={2024},
  eprint={2402.19159},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```
## Acknowledgments