# Video-P2P: Video Editing with Cross-attention Control
The official implementation of [Video-P2P](https://video-p2p.github.io/).
[Shaoteng Liu](https://www.shaotengliu.com/), [Yuechen Zhang](https://julianjuaner.github.io/), [Wenbo Li](https://fenglinglwb.github.io/), [Zhe Lin](https://sites.google.com/site/zhelin625/), [Jiaya Jia](https://jiaya.me/)
[![Project Website](https://img.shields.io/badge/Project-Website-orange)](https://video-p2p.github.io/)
[![arXiv](https://img.shields.io/badge/arXiv-2303.04761-b31b1b.svg)](https://arxiv.org/abs/2303.04761)
![Teaser](./docs/teaser.png)
## Changelog
- 2023.03.20 Release Gradio Demo.
- 2023.03.19 Release Code.
- 2023.03.09 Paper preprint on arXiv.
## Todo
- [x] Release the code with 6 examples.
- [x] Update a faster version.
- [x] Release all data.
- [x] Release the Gradio Demo.
- [ ] Release more configs and new applications.
## Setup
``` bash
pip install -r requirements.txt
```
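Before tuning, it can help to confirm the GPU stack and core dependencies are visible. A minimal sketch; the package list below is an assumption based on this repo's references (Tune-A-Video, diffusers), so adjust it to match `requirements.txt`:

```python
# Sanity-check which core packages are importable, without actually
# importing anything heavy. Package names are assumptions; edit to
# match requirements.txt.
import importlib.util

def check_packages(names):
    """Return {package_name: True/False} for whether each package is importable."""
    return {name: importlib.util.find_spec(name) is not None for name in names}

for pkg, ok in check_packages(["torch", "diffusers", "transformers", "xformers"]).items():
    print(f"{pkg}: {'found' if ok else 'MISSING'}")
```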
The code was tested on both a Tesla V100 (32GB) and an RTX 3090 (24GB).
The environment is similar to [Tune-A-Video](https://github.com/showlab/Tune-A-Video) and [prompt-to-prompt](https://github.com/google/prompt-to-prompt/).
[xformers](https://github.com/facebookresearch/xformers) on the RTX 3090 may hit this [issue](https://github.com/bryandlee/Tune-A-Video/issues/4).
## Quickstart
Please replace `pretrained_model_path` in the config with the path to your Stable Diffusion checkpoint.
``` bash
# You can reduce the number of tuning epochs to speed this up.
python run_tuning.py --config="configs/rabbit-jump-tune.yaml"  # Tuning for model initialization.
# Faster mode (about 1 minute on a V100):
python run_videop2p.py --config="configs/rabbit-jump-p2p.yaml" --fast
# Official mode (about 10 minutes on a V100, more stable):
python run_videop2p.py --config="configs/rabbit-jump-p2p.yaml"
```
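The `pretrained_model_path` swap can also be scripted across many configs. A minimal standard-library sketch; the key name comes from the instruction above, but the exact YAML layout (key on its own line) is an assumption:

```python
# Rewrite the pretrained_model_path value in a YAML config string.
# Assumes the key sits on its own line, e.g.:
#     pretrained_model_path: "CHANGE_ME"
import re

def set_pretrained_path(config_text: str, new_path: str) -> str:
    """Return config_text with the pretrained_model_path value replaced."""
    return re.sub(
        r"^(pretrained_model_path:\s*).*$",
        rf'\g<1>"{new_path}"',
        config_text,
        flags=re.MULTILINE,
    )

# Hypothetical config fragment, for illustration only.
text = 'pretrained_model_path: "CHANGE_ME"\noutput_dir: "./outputs"\n'
print(set_pretrained_path(text, "./checkpoints/stable-diffusion-v1-4"))
```

Pair it with `pathlib.Path.read_text()`/`write_text()` to patch the config files under `configs/` in place.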
## Dataset
We release our dataset [here]().
Download it to `./data` and explore your creativity!
## Results
<table class="center">
<tr>
<td width=50% style="text-align:center;">configs/rabbit-jump-p2p.yaml</td>
<td width=50% style="text-align:center;">configs/penguin-run-p2p.yaml</td>
</tr>
<tr>
<td><img src="https://video-p2p.github.io/assets/rabbit.gif"></td>
<td><img src="https://video-p2p.github.io/assets/penguin-crochet.gif"></td>
</tr>
<tr>
<td width=50% style="text-align:center;">configs/man-motor-p2p.yaml</td>
<td width=50% style="text-align:center;">configs/car-drive-p2p.yaml</td>
</tr>
<tr>
<td><img src="https://video-p2p.github.io/assets/motor.gif"></td>
<td><img src="https://video-p2p.github.io/assets/car.gif"></td>
</tr>
<tr>
<td width=50% style="text-align:center;">configs/tiger-forest-p2p.yaml</td>
<td width=50% style="text-align:center;">configs/bird-forest-p2p.yaml</td>
</tr>
<tr>
<td><img src="https://video-p2p.github.io/assets/tiger.gif"></td>
<td><img src="https://video-p2p.github.io/assets/bird-child.gif"></td>
</tr>
</table>
## Citation
```
@article{liu2023videop2p,
  author  = {Liu, Shaoteng and Zhang, Yuechen and Li, Wenbo and Lin, Zhe and Jia, Jiaya},
  title   = {Video-P2P: Video Editing with Cross-attention Control},
  journal = {arXiv preprint arXiv:2303.04761},
  year    = {2023},
}
```
## References
* prompt-to-prompt: https://github.com/google/prompt-to-prompt
* Tune-A-Video: https://github.com/showlab/Tune-A-Video
* diffusers: https://github.com/huggingface/diffusers