# Video-P2P: Video Editing with Cross-attention Control
The official implementation of [Video-P2P](https://video-p2p.github.io/).

[Shaoteng Liu](https://www.shaotengliu.com/), [Yuechen Zhang](https://julianjuaner.github.io/), [Wenbo Li](https://fenglinglwb.github.io/), [Zhe Lin](https://sites.google.com/site/zhelin625/), [Jiaya Jia](https://jiaya.me/)

[![Project Website](https://img.shields.io/badge/Project-Website-orange)](https://video-p2p.github.io/)
[![arXiv](https://img.shields.io/badge/arXiv-2303.04761-b31b1b.svg)](https://arxiv.org/abs/2303.04761)

![Teaser](./docs/teaser.png)

## Changelog

- 2023.03.20 Release Gradio Demo.
- 2023.03.19 Release Code.
- 2023.03.09 Paper preprint on arXiv.

## Todo

- [x] Release the code with 6 examples.
- [x] Release a faster mode.
- [x] Release all data.
- [x] Release the Gradio demo.
- [ ] Release more configs and new applications.

## Setup

``` bash
pip install -r requirements.txt
```

The code has been tested on a Tesla V100 (32 GB) and an RTX 3090 (24 GB).

The environment is similar to [Tune-A-Video](https://github.com/showlab/Tune-A-Video) and [prompt-to-prompt](https://github.com/google/prompt-to-prompt/).

[xformers](https://github.com/facebookresearch/xformers) on the RTX 3090 may run into this [issue](https://github.com/bryandlee/Tune-A-Video/issues/4).
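
For a fresh environment, something like the following works; the Python version and the optional xformers install are our assumptions, not pinned by the repository:

``` bash
# A minimal environment sketch; the Python version is an assumption,
# not something this repo pins.
conda create -n video-p2p python=3.10 -y
conda activate video-p2p
pip install -r requirements.txt

# Optional: memory-efficient attention. See the issue linked above if you
# are on an RTX 3090.
pip install xformers
```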

## Quickstart

Please replace `pretrained_model_path` in the configs with the path to your Stable Diffusion checkpoint.
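
For example, one way to fetch the weights and point a config at them; the checkpoint choice and the `sed` one-liner are illustrative assumptions, not project requirements:

``` bash
# Illustrative only: fetch Stable Diffusion v1-5 with git-lfs and point the
# tuning config at the local copy. Assumes the config stores the path under
# a top-level pretrained_model_path key (GNU sed syntax).
git lfs install
git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 checkpoints/stable-diffusion-v1-5
sed -i 's|pretrained_model_path:.*|pretrained_model_path: "./checkpoints/stable-diffusion-v1-5"|' \
    configs/rabbit-jump-tune.yaml
# Repeat for the matching *-p2p.yaml config.
```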

``` bash
# You can reduce the number of tuning epochs to speed things up.
python run_tuning.py  --config="configs/rabbit-jump-tune.yaml"  # Tune the model for initialization.

# Faster mode (about 1 minute on a V100):
python run_videop2p.py --config="configs/rabbit-jump-p2p.yaml" --fast

# Official mode (about 10 minutes on a V100, more stable):
python run_videop2p.py --config="configs/rabbit-jump-p2p.yaml"
```
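
To try your own clip, the intended pattern appears to be: copy a pair of configs, retarget them, then run the same two commands. A sketch, assuming the configs follow the Tune-A-Video-style layout (the video-path and prompt keys inside the YAML are assumptions; only the two scripts and their flags come from this repo):

``` bash
# Hypothetical workflow for a custom clip.
cp configs/rabbit-jump-tune.yaml configs/my-clip-tune.yaml
cp configs/rabbit-jump-p2p.yaml  configs/my-clip-p2p.yaml

# Edit both copies: point the video path at your frames and update the
# source/edited prompts.

python run_tuning.py   --config="configs/my-clip-tune.yaml"
python run_videop2p.py --config="configs/my-clip-p2p.yaml" --fast
```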

## Dataset

We release our dataset [here]().
Download it to `./data` and explore your creativity!
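
A minimal sketch of the expected layout, assuming each example's footage lives under `./data/<example-name>` to match the shipped configs; the archive name and directory structure here are hypothetical:

``` bash
# Hypothetical layout; the actual archive name and structure come from the
# dataset link above, which this sketch does not know.
mkdir -p data
unzip video-p2p-data.zip -d data/
ls data/   # e.g. rabbit-jump/ penguin-run/ man-motor/ car-drive/ ...
```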

## Results

<table class="center">
<tr>
  <td width=50% style="text-align:center;">configs/rabbit-jump-p2p.yaml</td>
  <td width=50% style="text-align:center;">configs/penguin-run-p2p.yaml</td>
</tr>
<tr>
  <td><img src="https://video-p2p.github.io/assets/rabbit.gif"></td>
  <td><img src="https://video-p2p.github.io/assets/penguin-crochet.gif"></td>
</tr>
<tr>
  <td width=50% style="text-align:center;">configs/man-motor-p2p.yaml</td>
  <td width=50% style="text-align:center;">configs/car-drive-p2p.yaml</td>
</tr>
<tr>
  <td><img src="https://video-p2p.github.io/assets/motor.gif"></td>
  <td><img src="https://video-p2p.github.io/assets/car.gif"></td>
</tr>
<tr>
  <td width=50% style="text-align:center;">configs/tiger-forest-p2p.yaml</td>
  <td width=50% style="text-align:center;">configs/bird-forest-p2p.yaml</td>
</tr>
<tr>
  <td><img src="https://video-p2p.github.io/assets/tiger.gif"></td>
  <td><img src="https://video-p2p.github.io/assets/bird-child.gif"></td>
</tr>
</table>

## Citation 
```
@article{liu2023videop2p,
      author  = {Liu, Shaoteng and Zhang, Yuechen and Li, Wenbo and Lin, Zhe and Jia, Jiaya},
      title   = {Video-P2P: Video Editing with Cross-attention Control},
      journal = {arXiv preprint arXiv:2303.04761},
      year    = {2023},
}
``` 

## References
* prompt-to-prompt: https://github.com/google/prompt-to-prompt
* Tune-A-Video: https://github.com/showlab/Tune-A-Video
* diffusers: https://github.com/huggingface/diffusers