GuanjieChen committed
Update README.md
README.md CHANGED
```diff
@@ -7,6 +7,10 @@ base_model:
 - maxin-cn/Latte-1
 - facebook/DiT-XL-2-256
 - Tencent-Hunyuan/HunyuanDiT
+tags:
+- video
+- image
+- model-efficiency
 ---
 # Accelerating Vision Diffusion Transformers with Skip Branches
 
@@ -40,7 +44,7 @@ Pretrained text-to-image Model of [HunYuan-DiT](https://github.com/Tencent/HunyuanDiT)
 (Results of HunYuan-DiT with skip-branches on text-to-image task. Latency is measured on one A100.)
 
 ### Acknowledgement
-Skip-DiT has been greatly inspired by the following amazing works and teams: [DeepCache](https://
+Skip-DiT has been greatly inspired by the following amazing works and teams: [DeepCache](https://github.com/horseee/DeepCache), [Latte](https://github.com/Vchitect/Latte), [DiT](https://github.com/facebookresearch/DiT), and [HunYuan-DiT](https://github.com/Tencent/HunyuanDiT), we thank all the contributors for open-sourcing.
 
 ### Visualization
 #### Text-to-Video
```