---
title: LivePortrait Vid2Vid
emoji: 💻
colorFrom: blue
colorTo: indigo
sdk: gradio
sdk_version: 4.38.1
app_file: app.py
pinned: false
license: apache-2.0
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
<h1 align="center">A modification of LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control that allows a video as the source</h1>
<br>
<div align="center">
<!-- <a href='LICENSE'><img src='https://img.shields.io/badge/license-MIT-yellow'></a> -->
<a href='https://arxiv.org/pdf/2407.03168'><img src='https://img.shields.io/badge/arXiv-LivePortrait-red'></a>
<a href='https://liveportrait.github.io'><img src='https://img.shields.io/badge/Project-LivePortrait-green'></a>
<a href='https://github.com/KwaiVGI/LivePortrait'>Official LivePortrait</a>
</div>
<br>
<p align="center">
<img src="./assets/docs/showcase2.gif" alt="showcase">
<br>
🔥 For more results, visit LivePortrait <a href="https://liveportrait.github.io/"><strong>homepage</strong></a> 🔥
</p>
## 🔥 Getting Started
### 1. Clone the code and prepare the environment
```bash
git clone https://github.com/KwaiVGI/LivePortrait
cd LivePortrait
# create env using conda
conda create -n LivePortrait python==3.9.18
conda activate LivePortrait
# install dependencies with pip
pip install -r requirements.txt
```
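Optionally, run a quick sanity check that the environment is usable (this assumes `requirements.txt` installed a CUDA-enabled build of PyTorch):
```bash
# verify that torch imports and a GPU is visible
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```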
### 2. Download pretrained weights
Download the pretrained weights from HuggingFace:
```bash
# you may need to run `git lfs install` first
git clone https://huggingface.co/KwaiVGI/liveportrait pretrained_weights
```
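If you'd rather not rely on git-lfs, a possible alternative is the `huggingface-cli` downloader from the `huggingface_hub` package (a sketch; assumes the CLI extra is installed):
```bash
pip install -U "huggingface_hub[cli]"
# fetch the same repo into ./pretrained_weights without git-lfs
huggingface-cli download KwaiVGI/liveportrait --local-dir pretrained_weights
```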
Or, download all pretrained weights from [Google Drive](https://drive.google.com/drive/folders/1UtKgzKjFAOmZkhNK-OYT0caJ_w2XAnib) or [Baidu Yun](https://pan.baidu.com/s/1MGctWmNla_vZxDbEp2Dtzw?pwd=z5cn). We have packed all weights in one directory. Unzip and place them in `./pretrained_weights`, ensuring the directory structure is as follows:
```text
pretrained_weights
├── insightface
│   └── models
│       └── buffalo_l
│           ├── 2d106det.onnx
│           └── det_10g.onnx
└── liveportrait
    ├── base_models
    │   ├── appearance_feature_extractor.pth
    │   ├── motion_extractor.pth
    │   ├── spade_generator.pth
    │   └── warping_module.pth
    ├── landmark.onnx
    └── retargeting_models
        └── stitching_retargeting_module.pth
```
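After unzipping, you can verify the layout with a small shell loop (a minimal sketch; the paths mirror the tree above):
```bash
# check that the key weight files are where inference.py expects them
for f in \
  insightface/models/buffalo_l/2d106det.onnx \
  insightface/models/buffalo_l/det_10g.onnx \
  liveportrait/base_models/appearance_feature_extractor.pth \
  liveportrait/base_models/motion_extractor.pth \
  liveportrait/base_models/spade_generator.pth \
  liveportrait/base_models/warping_module.pth \
  liveportrait/landmark.onnx \
  liveportrait/retargeting_models/stitching_retargeting_module.pth; do
  [ -f "pretrained_weights/$f" ] && echo "OK       $f" || echo "MISSING  $f"
done
```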
### 3. Inference 🚀
#### Fast hands-on
```bash
python inference.py
```
If the script runs successfully, you will get an output mp4 file named `animations/s6--d0_concat.mp4`, which concatenates the driving video, the input image, and the generated result.
<p align="center">
<img src="./assets/docs/inference.gif" alt="image">
</p>
Or, you can change the input by specifying the `-s` and `-d` arguments:
```bash
python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d0.mp4
# disable pasting back to run faster
python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d0.mp4 --no_flag_pasteback
# more options to see
python inference.py -h
```
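To animate several source images with the same driving video, a simple loop is enough (a sketch; assumes your sources are the example `.jpg` files):
```bash
# run inference once per example source image
for s in assets/examples/source/*.jpg; do
  python inference.py -s "$s" -d assets/examples/driving/d0.mp4
done
```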
To use a video as the source, pass it via the `-sd` argument, keep the driving video in `-d`, and set `-vd True`:
```bash
python inference.py -sd assets/examples/driving/d3.mp4 -d assets/examples/driving/d0.mp4 -vd True
# disable pasting back to run faster
python inference.py -sd assets/examples/driving/d3.mp4 -d assets/examples/driving/d0.mp4 -vd True --no_flag_pasteback
```
#### Driving video auto-cropping
To use your own driving video, we **recommend**:
- Crop it to a **1:1** aspect ratio (e.g., 512x512 or 256x256 pixels), or enable auto-cropping by `--flag_crop_driving_video`.
- Focus on the head area, similar to the example videos.
- Minimize shoulder movement.
- Make sure the first frame of the driving video is a frontal face with a **neutral expression**.
Below is an auto-cropping example using `--flag_crop_driving_video`:
```bash
python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d13.mp4 --flag_crop_driving_video
```
If the auto-cropping results are not satisfactory, you can adjust the scale and offset via the `--scale_crop_video` and `--vy_ratio_crop_video` options, or crop the video manually (see the `ffmpeg` sketch below).
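For manual cropping, one option is `ffmpeg` (a minimal sketch, assuming `ffmpeg` is installed; `my_driving.mp4` is a placeholder name). It center-crops to a square and scales to 512x512, matching the 1:1 recommendation above:
```bash
# center-crop the driving video to 1:1, then resize to 512x512
ffmpeg -i my_driving.mp4 -vf "crop='min(iw,ih)':'min(iw,ih)',scale=512:512" -c:a copy my_driving_512.mp4
```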
#### Template making
You can also use the auto-generated `.pkl` file to speed up inference and **protect privacy**, for example:
```bash
python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d5.pkl
```
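A possible workflow (a sketch; where the template `.pkl` is written is an assumption here, so check the console output of your first run):
```bash
# first run with the raw driving video; a template (e.g., d0.pkl) is auto-generated
python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d0.mp4
# later runs can reuse the template instead of the raw footage: faster, and the
# original driving video never needs to be shared
python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d0.pkl
```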
**Discover more interesting results on our [Homepage](https://liveportrait.github.io)**
### 4. Gradio interface 🤗
We also provide a Gradio interface for a better experience; simply run:
```bash
python app.py
```
You can specify the `--server_port`, `--share`, and `--server_name` arguments to suit your needs!
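For example, to bind to all interfaces on a fixed port and create a temporary public link (the port number is arbitrary):
```bash
# serve on all interfaces with a temporary public Gradio share link
python app.py --server_name 0.0.0.0 --server_port 8890 --share
```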
## Acknowledgements
We would like to thank the contributors of [FOMM](https://github.com/AliaksandrSiarohin/first-order-model), [Open Facevid2vid](https://github.com/zhanglonghao1992/One-Shot_Free-View_Neural_Talking_Head_Synthesis), [SPADE](https://github.com/NVlabs/SPADE), [InsightFace](https://github.com/deepinsight/insightface), [LivePortrait](https://github.com/KwaiVGI/LivePortrait) repositories, for their open research and contributions.
## Citation 💖
```bibtex
@article{guo2024liveportrait,
  title   = {LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control},
  author  = {Guo, Jianzhu and Zhang, Dingyun and Liu, Xiaoqiang and Zhong, Zhizhou and Zhang, Yuan and Wan, Pengfei and Zhang, Di},
  journal = {arXiv preprint arXiv:2407.03168},
  year    = {2024}
}
```