## FateZero: Fusing Attentions for Zero-shot Text-based Video Editing

[Chenyang Qi](https://chenyangqiqi.github.io/), [Xiaodong Cun](http://vinthony.github.io/), [Yong Zhang](https://yzhang2016.github.io), [Chenyang Lei](https://chenyanglei.github.io/), [Xintao Wang](https://xinntao.github.io/), [Ying Shan](https://scholar.google.com/citations?hl=zh-CN&user=4oXBp9UAAAAJ), and [Qifeng Chen](https://cqf.io)

<a href='https://arxiv.org/abs/2303.09535'><img src='https://img.shields.io/badge/ArXiv-2303.09535-red'></a> 
<a href='https://fate-zero-edit.github.io/'><img src='https://img.shields.io/badge/Project-Page-Green'></a>  [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ChenyangQiQi/FateZero/blob/main/colab_fatezero.ipynb)
[![GitHub](https://img.shields.io/github/stars/ChenyangQiQi/FateZero?style=social)](https://github.com/ChenyangQiQi/FateZero)
 

<!-- ![fatezero_demo](./docs/teaser.png) -->

<table class="center">
  <td><img src="docs/gif_results/17_car_posche_01_concat_result.gif"></td>
  <td><img src="docs/gif_results/3_sunflower_vangogh_conat_result.gif"></td>
  <tr>
  <td width=25% style="text-align:center;">"silver jeep ➜ Porsche car"</td>
  <td width=25% style="text-align:center;">"+ Van Gogh style"</td>
  <!-- <td width=25% style="text-align:center;">"Wonder Woman, wearing a cowboy hat, is skiing"</td>
  <td width=25% style="text-align:center;">"A man, wearing pink clothes, is skiing at sunset"</td> -->
</tr>
</table >

## Abstract
<b>TL;DR: FateZero edits your video via pretrained diffusion models without any training.</b>

<details><summary>CLICK for full abstract</summary>


> The diffusion-based generative models have achieved remarkable success in text-based image generation. However, since the generation process contains enormous randomness, it is still challenging to apply such models to real-world visual content editing, especially in videos. In this paper, we propose FateZero, a zero-shot text-based editing method for real-world videos that requires no per-prompt training or user-specific mask. To edit videos consistently, we propose several techniques based on pre-trained models. First, in contrast to the straightforward DDIM inversion technique, our approach captures intermediate attention maps during inversion, which effectively retain both structural and motion information. These maps are directly fused in the editing process rather than generated during denoising. To further minimize semantic leakage from the source video, we then fuse self-attentions with a blending mask obtained from the cross-attention features of the source prompt. Furthermore, we reform the self-attention mechanism in the denoising UNet by introducing spatial-temporal attention to ensure frame consistency. Despite its simplicity, our method is the first to show the capability of zero-shot text-driven video style and local attribute editing from a trained text-to-image model. We also achieve better zero-shot shape-aware editing based on a text-to-video model. Extensive experiments demonstrate superior temporal consistency and editing capability compared to previous works.
</details>

## Changelog
- 2023.03.27 Release [`attribute editing config`](config/attribute) and 
  <!-- [`data`](https://hkustconnect-my.sharepoint.com/:u:/g/personal/cqiaa_connect_ust_hk/Ee7J2IzZuaVGkefh-ZRp1GwB7RCUYU7MVJCKqeNWmOIpfg?e=dcOwb7) -->
  [`data`](https://github.com/ChenyangQiQi/FateZero/releases/download/v0.0.1/attribute.zip) used in the paper.
- 2023.03.22 Upload a `colab notebook` [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ChenyangQiQi/FateZero/blob/main/colab_fatezero.ipynb). Enjoy the fun of zero-shot video editing!
- 2023.03.22 Release [`style editing config`](config/style) and 
  <!--[`data`](https://hkustconnect-my.sharepoint.com/:u:/g/personal/cqiaa_connect_ust_hk/EaTqRAuW0eJLj0z_JJrURkcBZCC3Zvgsdo6zsXHhpyHhHQ?e=FzuiNG) -->
  [`data`](https://github.com/ChenyangQiQi/FateZero/releases/download/v0.0.1/style.zip)
  used in the paper.
- 2023.03.21 [Editing guidance](docs/EditingGuidance.md) is provided to help users edit in-the-wild videos. Welcome to play and give feedback!
- 2023.03.21 Update the `codebase and configuration`. It can now run with lower resources (a 16 GB GPU and less than 16 GB of CPU RAM) using the [new configuration](config/low_resource_teaser) in `config/low_resource_teaser`. 
<!-- A new option store all the attentions in hard disk, which require less ram. -->
- 2023.03.17 Release Code and Paper!

## Todo

- [x] Release the edit configs for the teaser
- [x] Memory and runtime profiling
- [x] Hands-on guidance for hyperparameter tuning
- [x] Colab
- [x] Release configs for other results and the in-the-wild dataset
  <!-- - [x] Style editing: done
  - [-] Attribute editing: in progress -->
- [ ] Hugging Face demo: in progress
- [ ] Tune-A-Video optimization and shape editing configs
- [ ] Release more applications

## Setup Environment
Our method is tested with CUDA 11, fp16 mixed precision via `accelerate`, and `xformers` on a single A100 or RTX 3090.

```bash
conda create -n fatezero38 python=3.8
conda activate fatezero38

pip install -r requirements.txt
```

`xformers` is recommended on A100 GPUs to reduce memory usage and running time. 

<details><summary>Click for xformers installation </summary>

We find its installation can be unstable. You may try the following prebuilt wheel:
```bash
wget https://github.com/ShivamShrirao/xformers-wheels/releases/download/4c06c79/xformers-0.0.15.dev0+4c06c79.d20221201-cp38-cp38-linux_x86_64.whl
pip install xformers-0.0.15.dev0+4c06c79.d20221201-cp38-cp38-linux_x86_64.whl
```

</details>

Validate the installation by running:
```bash
python test_install.py
```
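
Optionally, you can run a quick extra sanity check that PyTorch sees the GPU and that `xformers` imports cleanly (a minimal sketch; `test_install.py` may already cover similar checks):

```bash
# Confirm CUDA visibility and the installed xformers version
python -c "import torch; print('CUDA available:', torch.cuda.is_available())"
python -c "import xformers; print('xformers version:', xformers.__version__)"
```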

Our environment is similar to Tune-A-Video ([official](https://github.com/showlab/Tune-A-Video), [unofficial](https://github.com/bryandlee/Tune-A-Video)) and [prompt-to-prompt](https://github.com/google/prompt-to-prompt/). You may check them for more details.


## FateZero Editing

#### Style and Attribute Editing in Teaser

Download [Stable Diffusion v1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4) (or another image diffusion model of interest) and put it under `./ckpt/stable-diffusion-v1-4`. 

<details><summary>Click for bash command: </summary>
 
```bash
mkdir ./ckpt
# download from Hugging Face; takes about 20 GB of disk space
git lfs install
git clone https://huggingface.co/CompVis/stable-diffusion-v1-4
cd ./ckpt
ln -s ../stable-diffusion-v1-4 .
```
</details>
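
If you prefer not to pull the full repository with `git lfs`, a hedged alternative is to fetch the weights with `huggingface_hub` (a sketch, not the repo's documented path; `snapshot_download` with `local_dir` requires a reasonably recent `huggingface_hub`):

```bash
# Sketch: download the diffusers-format weights straight into ./ckpt
pip install -U huggingface_hub
python -c "from huggingface_hub import snapshot_download; snapshot_download(repo_id='CompVis/stable-diffusion-v1-4', local_dir='./ckpt/stable-diffusion-v1-4')"
```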

Then you can reproduce the style editing result in our teaser by running:

```bash
accelerate launch test_fatezero.py --config config/teaser/jeep_watercolor.yaml
# or CUDA_VISIBLE_DEVICES=0 python test_fatezero.py --config config/teaser/jeep_watercolor.yaml
```

<details><summary>The results are saved to `./result`. (Click for directory structure) </summary>

```
result
├── teaser
│   ├── jeep_posche
│   ├── jeep_watercolor
│           ├── cross-attention  # visualization of cross-attention during inversion
│           ├── sample           # result
│           ├── train_samples    # the input video

```

</details>

Editing 8 frames on an NVIDIA 3090 uses about 100 GB of CPU memory and 12 GB of GPU memory. We also provide a [`low-cost setting`](config/low_resource_teaser) for style editing with different hyper-parameters that fits on a 16 GB GPU; see the sketch below.
You may also try this low-cost setting on Colab.
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ChenyangQiQi/FateZero/blob/main/colab_fatezero.ipynb)

More speed and hardware benchmarks are available [here](docs/EditingGuidance.md#ddim-hyperparameters).
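
For example, a minimal way to try the low-cost setting locally (a sketch that simply loops over whatever YAML files ship in `config/low_resource_teaser`):

```bash
# Run each low-resource config in turn on a single GPU; point --config at one file if you only want one result
for cfg in config/low_resource_teaser/*.yaml; do
    CUDA_VISIBLE_DEVICES=0 python test_fatezero.py --config "$cfg"
done
```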

#### Shape and large motion editing with Tune-A-Video

Besides the style and attribute editing above, we also provide a `Tune-A-Video` [checkpoint](https://hkustconnect-my.sharepoint.com/:f:/g/personal/cqiaa_connect_ust_hk/EviSTWoAOs1EmHtqZruq50kBZu1E8gxDknCPigSvsS96uQ?e=492khj). You may download it and move it to `./ckpt/jeep_tuned_200/`.
<!-- We provide the [Tune-a-Video](https://drive.google.com/file/d/166eNbabM6TeJVy7hxol2gL1kUGKHi3Do/view?usp=share_link), you could download the data, unzip and put it to `data`. : -->

<details><summary>The directory structure should look like this: (Click for directory structure) </summary>

```
ckpt
├── stable-diffusion-v1-4
├── jeep_tuned_200
...
data
├── car-turn
│   ├── 00000000.png
│   ├── 00000001.png
│   ├── ...
video_diffusion
```
</details>

You can reproduce the shape editing result in our teaser by running:

```bash
accelerate launch test_fatezero.py --config config/teaser/jeep_posche.yaml
```


### Reproduce other results in the paper (in progress)
<!-- Download the data of [style editing](https://hkustconnect-my.sharepoint.com/:u:/g/personal/cqiaa_connect_ust_hk/EaTqRAuW0eJLj0z_JJrURkcBZCC3Zvgsdo6zsXHhpyHhHQ?e=FzuiNG) and [attribute editing](https://hkustconnect-my.sharepoint.com/:u:/g/personal/cqiaa_connect_ust_hk/Ee7J2IzZuaVGkefh-ZRp1GwB7RCUYU7MVJCKqeNWmOIpfg?e=dcOwb7)
-->
Download the data for style editing and attribute editing
from [OneDrive](https://hkustconnect-my.sharepoint.com/:f:/g/personal/cqiaa_connect_ust_hk/EkIeHj3CQiBNhm6iEEhJQZwBEBJNCGt3FsANmyqeAYbuXQ?e=FxYtJk) or from the GitHub [Release](https://github.com/ChenyangQiQi/FateZero/releases/tag/v0.0.1).
<details><summary>Click for wget bash command: </summary>
 
```
wget https://github.com/ChenyangQiQi/FateZero/releases/download/v0.0.1/attribute.zip
wget https://github.com/ChenyangQiQi/FateZero/releases/download/v0.0.1/style.zip
```
</details>

Unzip and place the data in [`./data`](data). Then use the configs in [`config/style`](config/style) and [`config/attribute`](config/attribute) to reproduce the results; a sketch is given below.
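
For instance, a minimal sketch for unpacking the released zips and sweeping all released configs (it assumes each zip extracts its video folders directly under `./data`):

```bash
# Unpack the released data, then run every style and attribute config
unzip attribute.zip -d ./data
unzip style.zip -d ./data
for cfg in config/style/*.yaml config/attribute/*.yaml; do
    accelerate launch test_fatezero.py --config "$cfg"
done
```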

The configs for our Tune-A-Video checkpoints will be updated later.

## Tuning guidance to edit YOUR video
We provide tuning guidance for editing in-the-wild videos [here](./docs/EditingGuidance.md). The work is still in progress; feedback via issues is welcome. A frame-extraction sketch for preparing your own video follows below.
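
As a starting point, your clip needs to be split into frames like the `data/car-turn` example above. A minimal `ffmpeg` sketch (the frame rate, resolution, and folder name are illustrative assumptions, not values required by the repo):

```bash
# Extract zero-padded PNG frames matching the data/<video-name>/%08d.png layout
mkdir -p data/my-video
ffmpeg -i my-video.mp4 -vf "fps=10,scale=512:512" -start_number 0 data/my-video/%08d.png
```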

## Style Editing Results with Stable Diffusion
We show the difference between the source prompt and the target prompt in the box below each video.

Note that the mp4 and gif files on this GitHub page are compressed. 
Please check our [Project Page](https://fate-zero-edit.github.io/) for the original mp4 video editing results.
<table class="center">

<tr>
  <td><img src="docs/gif_results/style/1_surf_ukiyo_01_concat_result.gif"></td>
  <td><img src="docs/gif_results/style/2_car_watercolor_01_concat_result.gif"></td>
    <td><img src="docs/gif_results/style/6_lily_monet_01_concat_result.gif"></td>
  <!-- <td><img src="https://tuneavideo.github.io/assets/results/tuneavideo/man-skiing/wonder-woman.gif"></td>              
  <td><img src="https://tuneavideo.github.io/assets/results/tuneavideo/man-skiing/pink-sunset.gif"></td> -->
</tr>
<tr>
  <td width=25% style="text-align:center;">"+ Ukiyo-e style"</td>
  <td width=25% style="text-align:center;">"+ watercolor painting"</td>
  <td width=25% style="text-align:center;">"+ Monet style"</td>
</tr>

<tr>
  <td><img src="docs/gif_results/style/4_rabit_pokemon_01_concat_result.gif"></td>
  <td><img src="docs/gif_results/style/5_train_shikai_01_concat_result.gif"></td>
  <td><img src="docs/gif_results/style/7_swan_carton_01_concat_result.gif"></td>

</tr>
<tr>

</tr>
<tr>
  <td width=25% style="text-align:center;">"+ Pokémon cartoon style"</td>
  <td width=25% style="text-align:center;">"+ Makoto Shinkai style"</td>
  <td width=25% style="text-align:center;">"+ cartoon style"</td>
</tr>
</table>

## Attribute Editing Results with Stable Diffusion
<table class="center">

<tr>

  <td><img src="docs/gif_results/attri/15_rabbit_eat_01_concat_result.gif"></td>
  <td><img src="docs/gif_results/attri/15_rabbit_eat_02_concat_result.gif"></td>
  <td><img src="docs/gif_results/attri/15_rabbit_eat_04_concat_result.gif"></td>

</tr>
<tr>
  <td width=25% style="text-align:center;">"rabbit, strawberry ➜ white rabbit, flower"</td>
  <td width=25% style="text-align:center;">"rabbit, strawberry ➜ squirrel, carrot"</td>
  <td width=25% style="text-align:center;">"rabbit, strawberry ➜ white rabbit, leaves"</td>

</tr>
<tr>

  <td><img src="docs/gif_results/attri/16_sq_eat_04_concat_result.gif"></td>
  <td><img src="docs/gif_results/attri/16_sq_eat_02_concat_result.gif"></td>
  <td><img src="docs/gif_results/attri/16_sq_eat_03_concat_result.gif"></td>

</tr>
<tr>
  <td width=25% style="text-align:center;">"squirrel ➜ robot squirrel"</td>
  <td width=25% style="text-align:center;">"squirrel, Carrot ➜ rabbit, eggplant"</td>
  <td width=25% style="text-align:center;">"squirrel, Carrot ➜ robot mouse, screwdriver"</td>

</tr>

<tr>

  <td><img src="docs/gif_results/attri/13_bear_tiger_leopard_lion_01_concat_result.gif"></td>
  <td><img src="docs/gif_results/attri/13_bear_tiger_leopard_lion_02_concat_result.gif"></td>
  <td><img src="docs/gif_results/attri/13_bear_tiger_leopard_lion_03_concat_result.gif"></td>

</tr>
<tr>
  <td width=25% style="text-align:center;">"bear ➜ a red tiger"</td>
  <td width=25% style="text-align:center;">"bear ➜ a yellow leopard"</td>
  <td width=25% style="text-align:center;">"bear ➜ a brown lion"</td>

</tr>
<tr>

  <td><img src="docs/gif_results/attri/14_cat_grass_tiger_corgin_02_concat_result.gif"></td>
  <td><img src="docs/gif_results/attri/14_cat_grass_tiger_corgin_03_concat_result.gif"></td>
  <td><img src="docs/gif_results/attri/14_cat_grass_tiger_corgin_04_concat_result.gif"></td>

</tr>
<tr>
  <td width=25% style="text-align:center;">"cat ➜ black cat, grass..."</td>
  <td width=25% style="text-align:center;">"cat ➜ red tiger"</td>
  <td width=25% style="text-align:center;">"cat ➜ Shiba-Inu"</td>

</tr>

<tr>

  <td><img src="docs/gif_results/attri/10_bus_gpu_01_concat_result.gif"></td>
  <td><img src="docs/gif_results/attri/11_dog_robotic_corgin_01_concat_result.gif"></td>
  <td><img src="docs/gif_results/attri/11_dog_robotic_corgin_02_concat_result.gif"></td>

</tr>
<tr>
  <td width=25% style="text-align:center;">"bus ➜ GPU"</td>
  <td width=25% style="text-align:center;">"gray dog ➜ yellow corgi"</td>
  <td width=25% style="text-align:center;">"gray dog ➜ robotic dog"</td>

</tr>
<tr>

  <td><img src="docs/gif_results/attri/9_duck_rubber_01_concat_result.gif"></td>
  <td><img src="docs/gif_results/attri/12_fox_snow_wolf_01_concat_result.gif"></td>
  <td><img src="docs/gif_results/attri/12_fox_snow_wolf_02_concat_result.gif"></td>

</tr>
<tr>
  <td width=25% style="text-align:center;">"white duck ➜ yellow rubber duck"</td>
  <td width=25% style="text-align:center;">"grass ➜ snow"</td>
  <td width=25% style="text-align:center;">"white fox ➜ grey wolf"</td>

</tr>


</table>

## Shape and large motion editing with Tune-A-Video
<table class="center">

<tr>
  <td><img src="docs/gif_results/shape/17_car_posche_01_concat_result.gif"></td>
  <td><img src="docs/gif_results/shape/18_swan_01_concat_result.gif"></td>
    <td><img src="docs/gif_results/shape/18_swan_02_concat_result.gif"></td>
  <!-- <td><img src="https://tuneavideo.github.io/assets/results/tuneavideo/man-skiing/wonder-woman.gif"></td>              
  <td><img src="https://tuneavideo.github.io/assets/results/tuneavideo/man-skiing/pink-sunset.gif"></td> -->
</tr>
<tr>
  <td width=25% style="text-align:center;">"silver jeep ➜ Porsche car"</td>
  <td width=25% style="text-align:center;">"Swan ➜ White Duck"</td>
  <td width=25% style="text-align:center;">"Swan ➜ Pink flamingo"</td>
</tr>

<tr>
  <td><img src="docs/gif_results/shape/19_man_wonder_01_concat_result.gif"></td>
  <td><img src="docs/gif_results/shape/19_man_wonder_02_concat_result.gif"></td>
  <td><img src="docs/gif_results/shape/19_man_wonder_03_concat_result.gif"></td>

</tr>
<tr>

</tr>
<tr>
  <td width=25% style="text-align:center;">"A man ➜ A Batman"</td>
  <td width=25% style="text-align:center;">"A man ➜ A Wonder Woman, With cowboy hat"</td>
  <td width=25% style="text-align:center;">"A man ➜ A Spider-Man"</td>
</tr>
</table>


## Demo Video

https://user-images.githubusercontent.com/45789244/225698509-79c14793-3153-4bba-9d6e-ede7d811d7f8.mp4

The video here is compressed due to GitHub's size limit.
The original full-resolution video is available [here](https://hkustconnect-my.sharepoint.com/:v:/g/personal/cqiaa_connect_ust_hk/EXKDI_nahEhKtiYPvvyU9SkBDTG2W4G1AZ_vkC7ekh3ENw?e=Xhgtmk).


## Citation 

```
@misc{qi2023fatezero,
      title={FateZero: Fusing Attentions for Zero-shot Text-based Video Editing}, 
      author={Chenyang Qi and Xiaodong Cun and Yong Zhang and Chenyang Lei and Xintao Wang and Ying Shan and Qifeng Chen},
      year={2023},
      eprint={2303.09535},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
``` 


## Acknowledgements

This repository borrows heavily from [Tune-A-Video](https://github.com/showlab/Tune-A-Video) and [prompt-to-prompt](https://github.com/google/prompt-to-prompt/). Thanks to the authors for sharing their code and models.

## Maintenance

This is the codebase for our research work. We are still working hard to update this repo, and more details are coming soon. If you have any questions or ideas to discuss, feel free to contact [Chenyang Qi](mailto:cqiaa@connect.ust.hk) or [Xiaodong Cun](mailto:vinthony@gmail.com).