kevinwang676 committed · Commit 2976dfc
1 Parent(s): fb4fac3
Update README.md

README.md CHANGED
@@ -1,117 +1,13 @@

Removed:
* [Hunyuan-DiT](https://github.com/Tencent/HunyuanDiT)
* [RIFE](https://github.com/hzwer/ECCV2022-RIFE)
* [ESRGAN](https://github.com/xinntao/ESRGAN)
* [IP-Adapter](https://github.com/tencent-ailab/IP-Adapter)
* [AnimateDiff](https://github.com/guoyww/animatediff/)
* [ControlNet](https://github.com/lllyasviel/ControlNet)
* [Stable Diffusion XL](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
* [Stable Diffusion](https://huggingface.co/runwayml/stable-diffusion-v1-5)

## News

- **June 21, 2024.** 🔥🔥🔥 We propose ExVideo, a post-tuning technique aimed at enhancing the capability of video generation models. We have extended Stable Video Diffusion to generate long videos of up to 128 frames.
  - [Project Page](https://ecnu-cilab.github.io/ExVideoProjectPage/)
  - Source code is released in this repo. See [`examples/ExVideo`](./examples/ExVideo/).
  - Models are released on [HuggingFace](https://huggingface.co/ECNU-CILab/ExVideo-SVD-128f-v1) and [ModelScope](https://modelscope.cn/models/ECNU-CILab/ExVideo-SVD-128f-v1).
  - The technical report is released on [arXiv](https://arxiv.org/abs/2406.14130).
  - You can try ExVideo in this [Demo](https://huggingface.co/spaces/modelscope/ExVideo-SVD-128f-v1)!

- **June 13, 2024.** DiffSynth Studio has been transferred to ModelScope. The developers have transitioned from "I" to "we". Of course, I will still participate in development and maintenance.

- **Jan 29, 2024.** We propose Diffutoon, a fantastic solution for toon shading.
  - [Project Page](https://ecnu-cilab.github.io/DiffutoonProjectPage/)
  - The source code is released in this project.
  - The technical report (IJCAI 2024) is released on [arXiv](https://arxiv.org/abs/2401.16224).

- **Dec 8, 2023.** We decided to develop a new project aimed at unleashing the potential of diffusion models, especially in video synthesis. The development of this project has started.

- **Nov 15, 2023.** We propose FastBlend, a powerful video deflickering algorithm.
  - The sd-webui extension is released on [GitHub](https://github.com/Artiprocher/sd-webui-fastblend).
  - Demo videos are shown on Bilibili, covering three tasks:
    - [Video deflickering](https://www.bilibili.com/video/BV1d94y1W7PE)
    - [Video interpolation](https://www.bilibili.com/video/BV1Lw411m71p)
    - [Image-driven video rendering](https://www.bilibili.com/video/BV1RB4y1Z7LF)
  - The technical report is released on [arXiv](https://arxiv.org/abs/2311.09265).
  - An unofficial ComfyUI extension developed by other users is released on [GitHub](https://github.com/AInseven/ComfyUI-fastblend).

- **Oct 1, 2023.** We released an early version of this project, namely FastSDXL, an attempt at building a diffusion engine.
  - The source code is released on [GitHub](https://github.com/Artiprocher/FastSDXL).
  - FastSDXL includes a trainable OLSS scheduler for efficiency improvement.
    - The original repo of OLSS is [here](https://github.com/alibaba/EasyNLP/tree/master/diffusion/olss_scheduler).
    - The technical report (CIKM 2023) is released on [arXiv](https://arxiv.org/abs/2305.14677).
    - A demo video is shown on [Bilibili](https://www.bilibili.com/video/BV1w8411y7uj).
    - Since OLSS requires additional training, we don't implement it in this project.

- **Aug 29, 2023.** We propose DiffSynth, a video synthesis framework.
  - [Project Page](https://ecnu-cilab.github.io/DiffSynth.github.io/)
  - The source code is released in [EasyNLP](https://github.com/alibaba/EasyNLP/tree/master/diffusion/DiffSynth).
  - The technical report (ECML PKDD 2024) is released on [arXiv](https://arxiv.org/abs/2308.03463).

## Installation

```
git clone https://github.com/modelscope/DiffSynth-Studio.git
cd DiffSynth-Studio
pip install -e .
```
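
To quickly confirm that the editable install is visible to Python (an illustrative check, not part of the official instructions):

```
# Should import without errors and resolve to your DiffSynth-Studio checkout.
import diffsynth
print(diffsynth.__file__)
```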

## Usage (in Python code)

The Python examples are in [`examples`](./examples/). We provide an overview here.
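
Most of the examples share one pattern: load weights into a `ModelManager`, then build a pipeline from it. Below is a minimal text-to-image sketch of that pattern; the model path is a placeholder, and parameter names should be double-checked against the scripts in [`examples`](./examples/).

```
import torch
from diffsynth import ModelManager, SDImagePipeline

# Load a Stable Diffusion checkpoint (placeholder path) into the manager.
model_manager = ModelManager(torch_dtype=torch.float16, device="cuda")
model_manager.load_models(["models/stable_diffusion/your_model.safetensors"])

# Build a text-to-image pipeline from the loaded weights.
pipe = SDImagePipeline.from_model_manager(model_manager)

torch.manual_seed(0)
image = pipe(
    prompt="a cat sitting on a windowsill, best quality",
    negative_prompt="low quality, blurry",
    num_inference_steps=30, height=512, width=512,
)
image.save("image.png")
```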

### Long Video Synthesis

We trained an extended video synthesis model that can generate up to 128 frames. See [`examples/ExVideo`](./examples/ExVideo/).

https://github.com/modelscope/DiffSynth-Studio/assets/35051019/d97f6aa9-8064-4b5b-9d49-ed6001bb9acc
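
A rough sketch of how the extended model can be driven, following the pattern of [`examples/ExVideo`](./examples/ExVideo/); the file names and generation parameters below are placeholders to verify against that folder.

```
import torch
from PIL import Image
from diffsynth import ModelManager, SVDVideoPipeline, save_video

# Load Stable Video Diffusion plus the ExVideo extension weights
# (placeholder file names; see examples/ExVideo for the exact ones).
model_manager = ModelManager(torch_dtype=torch.float16, device="cuda")
model_manager.load_models([
    "models/stable_video_diffusion/svd_xt.safetensors",
    "models/stable_video_diffusion/exvideo_128f.safetensors",
])
pipe = SVDVideoPipeline.from_model_manager(model_manager)

# Animate one input image into a 128-frame clip.
video = pipe(
    input_image=Image.open("input.png").resize((512, 512)),
    num_frames=128, fps=30, height=512, width=512,
    num_inference_steps=25,
)
save_video(video, "video.mp4", fps=30)
```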

### Image Synthesis

Generate high-resolution images by breaking the resolution limits of the underlying diffusion models! See [`examples/image_synthesis`](./examples/image_synthesis/).

LoRA fine-tuning is supported in [`examples/train`](./examples/train/).

|Model|Example|
|-|-|
|Stable Diffusion|![1024](https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/6fc84611-8da6-4a1f-8fee-9a34eba3b4a5)|
|Stable Diffusion XL|![1024](https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/67687748-e738-438c-aee5-96096f09ac90)|
|Stable Diffusion 3|![image_1024](https://github.com/modelscope/DiffSynth-Studio/assets/35051019/4df346db-6f91-420a-b4c1-26e205376098)|
|Kolors|![image_1024](https://github.com/modelscope/DiffSynth-Studio/assets/35051019/53ef6f41-da11-4701-8665-9f64392607bf)|
|Hunyuan-DiT|![image_1024](https://github.com/modelscope/DiffSynth-Studio/assets/35051019/60b022c8-df3f-4541-95ab-bf39f2fa8bb5)|
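
For the high-resolution results, the examples appear to use a two-stage generate-then-upscale scheme with tiled processing. A hedged sketch: `tiled` and `denoising_strength` reflect our reading of the image-synthesis scripts and should be verified there.

```
import torch
from diffsynth import ModelManager, SDXLImagePipeline

model_manager = ModelManager(torch_dtype=torch.float16, device="cuda")
model_manager.load_models(["models/stable_diffusion_xl/your_model.safetensors"])
pipe = SDXLImagePipeline.from_model_manager(model_manager)

# Stage 1: generate at the model's native resolution.
prompt, negative = "a painting of a mountain lake", "low quality, blurry"
image = pipe(prompt=prompt, negative_prompt=negative,
             num_inference_steps=30, height=1024, width=1024)

# Stage 2: img2img upscale. `tiled=True` is assumed to process the latents in
# overlapping tiles so memory stays bounded at 2048x2048 (verify the flag name).
image = pipe(prompt=prompt, negative_prompt=negative,
             input_image=image.resize((2048, 2048)), denoising_strength=0.4,
             num_inference_steps=30, height=2048, width=2048, tiled=True)
image.save("image_2048.png")
```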

### Toon Shading

Render realistic videos in a flat, cartoon-like style and enable video editing features. See [`examples/Diffutoon`](./examples/Diffutoon/).

https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/b54c05c5-d747-4709-be5e-b39af82404dd

https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/20528af5-5100-474a-8cdc-440b9efdd86c
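
Diffutoon runs are config-driven. Below is a minimal sketch of the entry point, assuming the `SDVideoPipelineRunner` interface used by the scripts in [`examples/Diffutoon`](./examples/Diffutoon/); the config contents are abbreviated here.

```
from diffsynth import SDVideoPipelineRunner

# The shipped scripts build one nested config (model paths, prompts, ControlNet
# units, input video, frame range) and hand it to the runner. Copy a complete
# config from examples/Diffutoon and edit it; the keys below are abbreviated.
config = {
    "models": {},    # checkpoint, ControlNet, and AnimateDiff paths
    "data": {},      # input video, output folder, resolution, frame count
    "pipeline": {},  # prompts, cfg scale, denoising strength, and so on
}
runner = SDVideoPipelineRunner()
runner.run(config)
```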

### Video Stylization

Video stylization without video models. See [`examples/diffsynth`](./examples/diffsynth/).

https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/59fb2f7b-8de0-4481-b79f-0c3a7361a1ea

## Usage (in WebUI)

```
python -m streamlit run DiffSynth_Studio.py
```

https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/93085557-73f3-4eee-a205-9829591ef954

Added:

---
title: Diffutoon
emoji: π
colorFrom: green
colorTo: yellow
sdk: gradio
sdk_version: 4.39.0
app_file: app.py
pinned: false
license: mit
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference