---
license: apache-2.0
---

This repository contains a pruned and partially reorganized version of [AniPortrait](https://github.com/Zejun-Yang/AniPortrait), with some new features.

```
@misc{wei2024aniportrait,
      title={AniPortrait: Audio-Driven Synthesis of Photorealistic Portrait Animations}, 
      author={Huawei Wei and Zejun Yang and Zhisheng Wang},
      year={2024},
      eprint={2403.17694},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```

# Added Features

## CPU Offloading

To maximize available VRAM, CPU offloading is provided and is enabled by default when using the command line.

- In python, use `pipeline.enable_model_cpu_offload(gpu_id: int=0)` to enable it.
- In the command-line, pass `--no-offload` or `-no` to disable it.
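For example, a minimal sketch in Python (the loading arguments mirror the `from_pretrained` example in the Python section below):

```py
import torch
from aniportrait import AniPortraitPipeline

pipeline = AniPortraitPipeline.from_pretrained(
    "benjamin-paine/aniportrait",
    torch_dtype=torch.float16,
    variant="fp16",
)
# Keep sub-models on the CPU, moving each one to GPU 0 only while it executes.
pipeline.enable_model_cpu_offload(gpu_id=0)
```

With offloading enabled, the explicit `.to("cuda", ...)` call shown in the Python section below is typically unnecessary, since the offload hooks manage device placement.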

## Video Reference

In addition to using an image as the reference, this repository supports using a video as the reference, via a modified temporally-aware reference U-Net.

### Automatic Masked Composition

When using a video reference, you can pass `paste_back=True` or `--paste-back/-pb` to automatically re-composite the generated face onto the reference video.

### Automatic Face Cropping

In addition to the above, you can pass `crop_to_face=True` or `--crop-to-face/-ctf` to automatically crop the input video to the detected face region prior to inference, then un-crop and composite the results. This means you can use this feature directly on HD video without pre-processing any of the inputs.

### Examples

The following examples were produced using the procedure described above, then interpolated and post-processed using [enfugue](https://github.com/painebenjamin/app.enfugue.ai).

They were made using the source videos at Full HD, using the following command-line format:

```sh
aniportrait video.mp4 --video video.mp4 --audio audio.mp3 --crop-to-face --num-inference-steps 50 -cfg 4.5
```

All videos are sourced from [Pexels](https://www.pexels.com/); the voices are AI-generated readings of [Attention Is All You Need (arXiv:1706.03762)](https://arxiv.org/abs/1706.03762).

<table>
  <tr>
    <td>
      <video controls src="https://cdn-uploads.huggingface.co/production/uploads/64429aaf7feb866811b12f73/nW0kiagVIP2Nr_K_SjJUP.mp4"></video>
    </td>
    <td>
      <video controls src="https://cdn-uploads.huggingface.co/production/uploads/64429aaf7feb866811b12f73/FOj4drfm2OLa-n7JxYyIj.mp4"></video>
    </td>
    <td>
      <video controls src="https://cdn-uploads.huggingface.co/production/uploads/64429aaf7feb866811b12f73/Tu2ac8Cuvaexo_S-kgpPv.mp4"></video>
    </td>
  </tr>
</table>

# Usage

## Installation

First, install the AniPortrait package into your Python environment. If you're creating a new environment for AniPortrait, be sure to also install the version of torch you want with CUDA support; otherwise, inference will run on the CPU only.

```sh
pip install git+https://github.com/painebenjamin/aniportrait.git
```

## Command-Line

A command-line utility `aniportrait` is installed with the package.

```sh
Usage: aniportrait [OPTIONS] INPUT_IMAGE_OR_VIDEO

  Run AniPortrait on an input image with a video, and/or audio file. When only
  a video file is provided, a video-to-video (face reenactment) animation is
  performed. When only an audio file is provided, an audio-to-video (lip-sync)
  animation is performed. When both a video and audio file are provided, a
  video-to-video animation is performed with the audio as guidance for the
  face and mouth movements.

Options:
  -v, --video FILE                Video file to drive the animation.
  -a, --audio FILE                Audio file to drive the animation.
  -fps, --frame-rate INTEGER      Video FPS. Also controls the sampling rate
                                  of the audio. Will default to the video FPS
                                  if a video file is provided, or 30 if not.
  -cfg, --guidance-scale FLOAT    Guidance scale for the diffusion process.
                                  [default: 3.5]
  -ns, --num-inference-steps INTEGER
                                  Number of diffusion steps.  [default: 20]
  -cf, --context-frames INTEGER   Number of context frames to use.  [default:
                                  16]
  -co, --context-overlap INTEGER  Number of context frames to overlap.
                                  [default: 4]
  -nf, --num-frames INTEGER       An explicit number of frames to use. When
                                  not passed, use the length of the audio or
                                  video
  -s, --seed INTEGER              Random seed.
  -w, --width INTEGER             Output video width. Defaults to the input
                                  image width.
  -h, --height INTEGER            Output video height. Defaults to the input
                                  image height.
  -m, --model TEXT                HuggingFace model name.
  -nh, --no-half                  Do not use half precision.
  -no, --no-offload               Do not offload to the CPU to preserve GPU
                                  memory.
  -g, --gpu-id INTEGER            GPU ID to use.
  -sf, --model-single-file        Download and use a single file instead of a
                                  directory.
  -cf, --config-file TEXT         Config file to use when using the model-
                                  single-file option. Accepts a path or a
                                  filename in the same directory as the single
                                  file. Will download from the repository
                                  passed in the model option if not provided.
                                  [default: config.json]
  -mf, --model-filename TEXT      The model file to download when using the
                                  model-single-file option.  [default:
                                  aniportrait.safetensors]
  -rs, --remote-subfolder TEXT    Remote subfolder to download from when using
                                  the model-single-file option.
  -cd, --cache-dir DIRECTORY      Cache directory to download to. Default uses
                                  the huggingface cache.
  -o, --output FILE               Output file.  [default: output.mp4]
  -pb, --paste-back               Paste the original background back in.
  -pbcf, --paste-back-color-fix [adain|wavelet]
                                  Color fix method to use when pasting back.
                                  [default: wavelet]
  -ctf, --crop-to-face            Crop the input to the face prior to
                                  execution, then merge the cropped result
                                  with the uncropped image. Implies --paste-
                                  back.
  -pop, --pose-output FILE        When passed, save the pose image(s) to this
                                  file.
  -mop, --mask-output FILE        When passed, save the mask image(s) to this
                                  file.
  -cop, --combined-output FILE    When passed, save the combined image(s) to
                                  this file.
  -mb, --mask-blur INTEGER        Amount of blur to apply to the mask when
                                  using cropping or pasting.  [default: 15]
  -md, --mask-dilate INTEGER      Amount of dilation to apply to the mask when
                                  using cropping or pasting.  [default: 31]
  -ms, --mask-slow                Use a slower, more accurate mask generation
                                  method.
  -lss, --leading-seconds-silence FLOAT
                                  Seconds of silence to add to the beginning
                                  of the audio.  [default: 0.0]
  -tss, --trailing-seconds-silence FLOAT
                                  Seconds of silence to add to the end of the
                                  audio.  [default: 0.0]
  --help                          Show this message and exit.
```

## Python

You can create the pipeline, automatically pulling the weights from this repository, either as individual models:

```py
import torch
from aniportrait import AniPortraitPipeline
pipeline = AniPortraitPipeline.from_pretrained(
  "benjamin-paine/aniportrait",
  torch_dtype=torch.float16,
  variant="fp16",
).to("cuda", dtype=torch.float16)
```

Or, as a single file:

```py
import torch
from aniportrait import AniPortraitPipeline
pipeline = AniPortraitPipeline.from_single_file(
  "benjamin-paine/aniportrait",
  torch_dtype=torch.float16,
  variant="fp16",
).to("cuda", dtype=torch.float16)
```

The `AniPortraitPipeline` is a mega pipeline, capable of instantiating and executing other pipelines. It provides the following functions:

## Workflows

### img2img

```py
pipeline.img2img(
    reference_image: PIL.Image.Image,
    pose_reference_image: PIL.Image.Image,
    num_inference_steps: int,
    guidance_scale: float,
    eta: float=0.0,
    reference_pose_image: Optional[Image.Image]=None,
    generation: Optional[Union[torch.Generator, List[torch.Generator]]]=None,
    output_type: Optional[str]="pil",
    return_dict: bool=True,
    callback: Optional[Callable[[int, int, torch.FloatTensor], None]]=None,
    callback_steps: Optional[int]=None,
    width: Optional[int]=None,
    height: Optional[int]=None,
    **kwargs: Any
) -> Pose2VideoPipelineOutput
```

Using a reference image (for structure) and a pose reference image (for pose), render an image of the former in the pose of the latter.
- The pose reference image here is an unprocessed image, from which the face pose will be extracted.
- Optionally pass `reference_pose_image` to designate the pose of `reference_image`. When not passed, the pose of `reference_image` is automatically detected.
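
For example, a minimal sketch of a call (the image file names `reference.png` and `pose_source.png` are hypothetical):

```py
import torch
from PIL import Image
from aniportrait import AniPortraitPipeline

pipeline = AniPortraitPipeline.from_pretrained(
    "benjamin-paine/aniportrait",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda", dtype=torch.float16)

# Render the subject of reference.png in the pose detected from pose_source.png.
result = pipeline.img2img(
    reference_image=Image.open("reference.png").convert("RGB"),
    pose_reference_image=Image.open("pose_source.png").convert("RGB"),
    num_inference_steps=25,
    guidance_scale=3.5,
)  # returns a Pose2VideoPipelineOutput
```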

### vid2vid

```py
pipeline.vid2vid(
    reference_image: PIL.Image.Image,
    pose_reference_images: List[PIL.Image.Image],
    num_inference_steps: int,
    guidance_scale: float,
    eta: float=0.0,
    reference_pose_image: Optional[Image.Image]=None,
    generation: Optional[Union[torch.Generator, List[torch.Generator]]]=None,
    output_type: Optional[str]="pil",
    return_dict: bool=True,
    callback: Optional[Callable[[int, int, torch.FloatTensor], None]]=None,
    callback_steps: Optional[int]=None,
    width: Optional[int]=None,
    height: Optional[int]=None,
    video_length: Optional[int]=None,
    context_schedule: str="uniform",
    context_frames: int=16,
    context_overlap: int=4,
    context_batch_size: int=1,
    interpolation_factor: int=1,
    use_long_video: bool=True,
    **kwargs: Any
) -> Pose2VideoPipelineOutput
```

Using a reference image (for structure) and a sequence of pose reference images (for pose), render a video of the former in the poses of the latter, using context windowing for long-video generation when the poses are longer than 16 frames.
- Optionally pass `use_long_video = False` to disable using the long video pipeline.
- Optionally pass `reference_pose_image` to designate the pose of `reference_image`. When not passed, the pose of `reference_image` is automatically detected.
- Optionally pass `video_length` to use this many frames. Default is the same as the length of the pose reference images.
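
A minimal sketch, assuming `pipeline` is an `AniPortraitPipeline` loaded as shown above, and that `reference.png` and the `pose_frames/` directory of pre-extracted video frames are your own (hypothetical) inputs:

```py
from pathlib import Path
from PIL import Image

# Frames previously extracted from the driving video, in order.
pose_reference_images = [
    Image.open(path).convert("RGB")
    for path in sorted(Path("pose_frames").glob("*.png"))
]

# `pipeline` is an AniPortraitPipeline created as in the Python section above.
result = pipeline.vid2vid(
    reference_image=Image.open("reference.png").convert("RGB"),
    pose_reference_images=pose_reference_images,
    num_inference_steps=25,
    guidance_scale=3.5,
    context_frames=16,   # context windowing is used past this many poses
    context_overlap=4,
)
```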

### audio2vid

```py
pipeline.audio2vid(
    audio: str,
    reference_image: PIL.Image.Image,
    num_inference_steps: int,
    guidance_scale: float,
    fps: int=30,
    eta: float=0.0,
    reference_pose_image: Optional[Image.Image]=None,
    pose_reference_images: Optional[List[PIL.Image.Image]]=None,
    generation: Optional[Union[torch.Generator, List[torch.Generator]]]=None,
    output_type: Optional[str]="pil",
    return_dict: bool=True,
    callback: Optional[Callable[[int, int, torch.FloatTensor], None]]=None,
    callback_steps: Optional[int]=None,
    width: Optional[int]=None,
    height: Optional[int]=None,
    video_length: Optional[int]=None,
    context_schedule: str="uniform",
    context_frames: int=16,
    context_overlap: int=4,
    context_batch_size: int=1,
    interpolation_factor: int=1,
    use_long_video: bool=True,
    pose_filename: Optional[str]=None,
    leading_seconds_silence: float=0.0,
    trailing_seconds_silence: float=0.0,
    **kwargs: Any
) -> Pose2VideoPipelineOutput
```

Using an audio file, draw `fps` face pose images per second for the duration of the audio. Then, using those face pose images, render a video.
- Optionally include a list of images to extract the poses from prior to merging with audio-generated poses (in essence, pass a video here to control non-speech motion). The default is a moderately active loop of head movement.
- Optionally pass width/height to modify the size. Defaults to reference image size.
- Optionally pass `use_long_video = False` to disable using the long video pipeline.
- Optionally pass `reference_pose_image` to designate the pose of `reference_image`. When not passed, the pose of `reference_image` is automatically detected.
- Optionally pass `video_length` to use this many frames. Defaults to the length of the pose reference images or the number of audio frames (converted to the target FPS), whichever is shorter.
- Optionally pass `leading_seconds_silence` and/or `trailing_seconds_silence` to add silent frame(s) to the beginning and/or end of the audio. This will be adjusted for your passed or detected frame rate.
- Optionally pass `pose_filename`, `mask_filename`, and/or `combined_filename` to save the pose, mask, and/or combined frames to the corresponding video files for debugging.
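
A minimal sketch, assuming `pipeline` is loaded as shown above and that `speech.wav` and `reference.png` are your own (hypothetical) inputs:

```py
from PIL import Image

# `pipeline` is an AniPortraitPipeline created as in the Python section above.
result = pipeline.audio2vid(
    audio="speech.wav",
    reference_image=Image.open("reference.png").convert("RGB"),
    num_inference_steps=25,
    guidance_scale=3.5,
    fps=30,
    leading_seconds_silence=0.5,    # half a second of silence before speech begins
    pose_filename="pose_debug.mp4", # optionally save the drawn pose frames
)
```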

### audiovid2vid

```py
pipeline.audiovid2vid(
    audio: str,
    reference_image: List[Image.Image],
    num_inference_steps: int=25,
    guidance_scale: float=3.5,
    fps: int=30,
    eta: float=0.0,
    reference_pose_image: Optional[Image.Image]=None,
    pose_reference_images: Optional[List[Image.Image]]=None,
    generation: Optional[Union[torch.Generator, List[torch.Generator]]]=None,
    output_type: Optional[str]="pil",
    return_dict: bool=True,
    callback: Optional[Callable[[int, int, torch.FloatTensor], None]]=None,
    callback_steps: Optional[int]=None,
    context_schedule: str="uniform",
    context_frames: int=16,
    context_overlap: int=4,
    context_batch_size: int=1,
    interpolation_factor: int=1,
    width: Optional[int]=None,
    height: Optional[int]=None,
    video_length: Optional[int]=None,
    use_long_video: bool=True,
    paste_back: bool=True,
    paste_back_color_fix: Optional[Literal["wavelet", "adain"]]="wavelet",
    crop_to_face: bool=False,
    crop_to_face_target_size: Optional[int]=512,
    crop_to_face_padding: Optional[int]=64,
    mask_filename: Optional[str]=None,
    pose_filename: Optional[str]=None,
    combined_filename: Optional[str]=None,
    mask_dilate: Optional[int]=31,
    mask_gaussian_kernel_size: Optional[int]=15,
    mask_first_frame: bool=True,
    leading_seconds_silence: float=0.0,
    trailing_seconds_silence: float=0.0,
    **kwargs: Any
) -> Pose2VideoPipelineOutput
```

Using an audio file, draw `fps` face pose images per second for the duration of the audio. Then, using those face pose images, render a video using a video as a reference.
- Optionally pass width/height to modify the size. Defaults to reference image size.
- Optionally pass `use_long_video = False` to disable using the long video pipeline.
- Optionally pass `video_length` to use this many frames. Defaults to the length of the pose reference images or the number of audio frames (converted to the target FPS), whichever is shorter.
- Optionally pass `paste_back = True` to re-composite the output onto the input.
- When using `paste_back`, the face is color-fixed when re-pasting in order to reduce visible differences. The default method is `wavelet`; pass `adain` or `None` for other options.
- Optionally pass `crop_to_face = True` to crop all images to the face region (with padding) prior to diffusion. This implies `paste_back = True`.
- When using `crop_to_face`, the faces must first be located in a potentially large image, which requires a slow tiled face detection pass over the whole frame. To reduce the time spent generating additional masks, the default behavior (`mask_first_frame = True`) is to run this detection only once and reuse the faces found in the first frame to locate the detection regions in subsequent frames. Set `mask_first_frame = False` to perform tiled face detection on every input frame; this is slower but allows for more variability between frames.
- Optionally pass `leading_seconds_silence` and/or `trailing_seconds_silence` to add silent frame(s) to the beginning and/or end of the audio. This will be adjusted for your passed or detected frame rate.
- Optionally pass `pose_filename`, `mask_filename`, and/or `combined_filename` to save the pose, mask, and/or combined frames to the corresponding video files for debugging.
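
A minimal sketch of the video-reference workflow, assuming `pipeline` is loaded as shown above and that `speech.wav` and the `video_frames/` directory of pre-extracted frames are your own (hypothetical) inputs:

```py
from pathlib import Path
from PIL import Image

# Frames previously extracted from the reference video, in order.
reference_frames = [
    Image.open(path).convert("RGB")
    for path in sorted(Path("video_frames").glob("*.png"))
]

# `pipeline` is an AniPortraitPipeline created as in the Python section above.
result = pipeline.audiovid2vid(
    audio="speech.wav",
    reference_image=reference_frames,  # a list of frames when using a video reference
    num_inference_steps=25,
    guidance_scale=3.5,
    fps=30,
    crop_to_face=True,                 # implies paste_back=True
    paste_back_color_fix="wavelet",
    mask_first_frame=True,             # tiled face detection on the first frame only
)
```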

## Internals/Helpers

### img2pose

```py
pipeline.img2pose(
    reference_image: PIL.Image.Image,
    width: Optional[int]=None,
    height: Optional[int]=None
) -> PIL.Image.Image
```

Detects face landmarks in an image and draws a face pose image.
- Optionally modify the original width and height.

### vid2pose

```py
pipeline.vid2pose(
    reference_image: PIL.Image.Image,
    retarget_image: Optional[PIL.Image.Image],
    width: Optional[int]=None,
    height: Optional[int]=None
) -> List[PIL.Image.Image]
```

Detects face landmarks in a series of images and draws pose images.
- Optionally modify the original width and height.
- Optionally retarget to a different face position, useful for video-to-video tasks.

### audio2pose

```py
pipeline.audio2pose(
    audio_path: str,
    fps: int=30,
    reference_image: Optional[PIL.Image.Image]=None,
    pose_reference_images: Optional[List[PIL.Image.Image]]=None,
    width: Optional[int]=None,
    height: Optional[int]=None
) -> List[PIL.Image.Image]
```

Using an audio file, draw `fps` face pose images per second for the duration of the audio.
- Optionally include a reference image to extract the face shape and initial position from. Default has a generic androgynous face shape.
- Optionally include a list of images to extract the poses from prior to merging with audio-generated poses (in essence, pass a video here to control non-speech motion). The default is a moderately active loop of head movement.
- Optionally pass width/height to modify the size. Defaults to reference image size, then pose image sizes, then 256.
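
A minimal sketch, assuming `pipeline` is loaded as shown above and that `speech.wav` and `reference.png` are your own (hypothetical) inputs:

```py
from PIL import Image

# `pipeline` is an AniPortraitPipeline created as in the Python section above.
pose_images = pipeline.audio2pose(
    audio_path="speech.wav",
    fps=30,
    reference_image=Image.open("reference.png").convert("RGB"),
)
# One pose frame is drawn for each 1/30th of a second of audio.
print(len(pose_images), pose_images[0].size)
```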

### pose2img

```py
pipeline.pose2img(
    reference_image: PIL.Image.Image,
    pose_image: PIL.Image.Image,
    num_inference_steps: int,
    guidance_scale: float,
    eta: float=0.0,
    reference_pose_image: Optional[Image.Image]=None,
    generation: Optional[Union[torch.Generator, List[torch.Generator]]]=None,
    output_type: Optional[str]="pil",
    return_dict: bool=True,
    callback: Optional[Callable[[int, int, torch.FloatTensor], None]]=None,
    callback_steps: Optional[int]=None,
    width: Optional[int]=None,
    height: Optional[int]=None,
    **kwargs: Any
) -> Pose2VideoPipelineOutput
```

Using a reference image (for structure) and a pose image (for pose), render an image of the former in the pose of the latter.
- The pose image here is a processed face pose. To pass a non-processed face pose, see `img2img`.
- Optionally pass `reference_pose_image` to designate the pose of `reference_image`. When not passed, the pose of `reference_image` is automatically detected.
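
A minimal sketch chaining `img2pose` and `pose2img`, assuming `pipeline` is loaded as shown above and the file names are hypothetical:

```py
from PIL import Image

# `pipeline` is an AniPortraitPipeline created as in the Python section above.
# First, turn an unprocessed photo into a drawn face pose image.
pose_image = pipeline.img2pose(Image.open("pose_source.png").convert("RGB"))

# Then render the reference subject in that pose.
result = pipeline.pose2img(
    reference_image=Image.open("reference.png").convert("RGB"),
    pose_image=pose_image,
    num_inference_steps=25,
    guidance_scale=3.5,
)
```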

### pose2vid

```py
pipeline.pose2vid(
    reference_image: PIL.Image.Image,
    pose_images: List[PIL.Image.Image],
    num_inference_steps: int,
    guidance_scale: float,
    eta: float=0.0,
    reference_pose_image: Optional[Image.Image]=None,
    generation: Optional[Union[torch.Generator, List[torch.Generator]]]=None,
    output_type: Optional[str]="pil",
    return_dict: bool=True,
    callback: Optional[Callable[[int, int, torch.FloatTensor], None]]=None,
    callback_steps: Optional[int]=None,
    width: Optional[int]=None,
    height: Optional[int]=None,
    video_length: Optional[int]=None,
    **kwargs: Any
) -> Pose2VideoPipelineOutput
```

Using a reference image (for structure) and pose images (for pose), render a video of the former in the poses of the latter.
- The pose images here are processed face poses. To pass non-processed face poses, see `vid2vid`.
- Optionally pass `reference_pose_image` to designate the pose of `reference_image`. When not passed, the pose of `reference_image` is automatically detected.
- Optionally pass `video_length` to use this many frames. Default is the same as the length of the pose images.

### pose2vid_long

```py
pipeline.pose2vid_long(
    reference_image: PIL.Image.Image,
    pose_images: List[PIL.Image.Image],
    num_inference_steps: int,
    guidance_scale: float,
    eta: float=0.0,
    reference_pose_image: Optional[Image.Image]=None,
    generation: Optional[Union[torch.Generator, List[torch.Generator]]]=None,
    output_type: Optional[str]="pil",
    return_dict: bool=True,
    callback: Optional[Callable[[int, int, torch.FloatTensor], None]]=None,
    callback_steps: Optional[int]=None,
    width: Optional[int]=None,
    height: Optional[int]=None,
    video_length: Optional[int]=None,
    context_schedule: str="uniform",
    context_frames: int=16,
    context_overlap: int=4,
    context_batch_size: int=1,
    interpolation_factor: int=1,
    **kwargs: Any
) -> Pose2VideoPipelineOutput
```

Using a reference image (for structure) and pose images (for pose), render a video of the former in the poses of the latter, using context windowing for long-video generation.
- The pose images here are processed face poses. To pass non-processed face poses, see `vid2vid`.
- Optionally pass `reference_pose_image` to designate the pose of `reference_image`. When not passed, the pose of `reference_image` is automatically detected.
- Optionally pass `video_length` to use this many frames. Default is the same as the length of the pose images.
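
A minimal sketch, assuming `pipeline` is loaded as shown above and that `reference.png` and the `pose_frames/` directory of pre-drawn pose images (e.g. produced by `vid2pose` or `audio2pose`) are your own (hypothetical) inputs:

```py
from pathlib import Path
from PIL import Image

# Pre-drawn face pose frames, in order.
pose_images = [
    Image.open(path).convert("RGB")
    for path in sorted(Path("pose_frames").glob("*.png"))
]

# `pipeline` is an AniPortraitPipeline created as in the Python section above.
result = pipeline.pose2vid_long(
    reference_image=Image.open("reference.png").convert("RGB"),
    pose_images=pose_images,
    num_inference_steps=25,
    guidance_scale=3.5,
    context_frames=16,   # size of each temporal context window
    context_overlap=4,   # frames shared between adjacent windows
)
```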