benjamin-paine committed aad8056 (parent: 63e556c): Update README.md
First, install the AniPortrait package into your Python environment:
```sh
pip install git+https://github.com/painebenjamin/aniportrait.git
```

## Command-Line

A command-line utility `aniportrait` is installed with the package.
```sh
Usage: aniportrait [OPTIONS] INPUT_IMAGE

  Run AniPortrait on an input image with a video and/or audio file.

  - When only a video file is provided, a video-to-video (face reenactment)
    animation is performed.
  - When only an audio file is provided, an audio-to-video (lip-sync)
    animation is performed.
  - When both a video and audio file are provided, a video-to-video animation
    is performed with the audio as guidance for the face and mouth movements.

Options:
  -v, --video FILE                Video file to drive the animation.
  -a, --audio FILE                Audio file to drive the animation.
  -fps, --frame-rate INTEGER      Video FPS. Also controls the sampling rate
                                  of the audio. Will default to the video FPS
                                  if a video file is provided, or 30 if not.
  -cfg, --guidance-scale FLOAT    Guidance scale for the diffusion process.
                                  [default: 3.5]
  -ns, --num-inference-steps INTEGER
                                  Number of diffusion steps. [default: 20]
  -cf, --context-frames INTEGER   Number of context frames to use.
                                  [default: 16]
  -co, --context-overlap INTEGER  Number of context frames to overlap.
                                  [default: 4]
  -nf, --num-frames INTEGER       An explicit number of frames to use. When
                                  not passed, use the length of the audio or
                                  video.
  -s, --seed INTEGER              Random seed.
  -w, --width INTEGER             Output video width. Defaults to the input
                                  image width.
  -h, --height INTEGER            Output video height. Defaults to the input
                                  image height.
  -m, --model TEXT                HuggingFace model name.
  -nh, --no-half                  Do not use half precision.
  -g, --gpu-id INTEGER            GPU ID to use.
  -sf, --single-file              Download and use a single file instead of a
                                  directory.
  -cf, --config-file TEXT         Config file to use when using the
                                  single-file option. Accepts a path or a
                                  filename in the same directory as the
                                  single file. Will download from the
                                  repository passed in the model option if
                                  not provided. [default: config.json]
  -mf, --model-filename TEXT      The model file to download when using the
                                  single-file option.
                                  [default: aniportrait.safetensors]
  -rs, --remote-subfolder TEXT    Remote subfolder to download from when
                                  using the single-file option.
  -c, --cache-dir DIRECTORY       Cache directory to download to. Default
                                  uses the huggingface cache.
  -o, --output FILE               Output file. [default: output.mp4]
  --help                          Show this message and exit.
```
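As a sketch, a typical lip-sync invocation might look like the following. The file names `portrait.png` and `speech.wav` are placeholders, and the flags used are only those documented in the options above:

```sh
# Hypothetical example: lip-sync a still portrait to a speech recording.
# portrait.png and speech.wav are placeholder paths.
aniportrait portrait.png \
    --audio speech.wav \
    --frame-rate 30 \
    --num-inference-steps 20 \
    --seed 42 \
    --output output.mp4
```

Since no `--video` flag is passed, this runs the audio-to-video (lip-sync) mode described above; fixing `--seed` makes repeated runs reproducible.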
## Python

You can create the pipeline, automatically pulling the weights from this repository, either as individual models:

```py
from aniportrait import AniPortraitPipeline