MohamedIFQ
committed on
Upload 16 files
- .gitattributes +8 -0
- docs/FAQ.md +46 -0
- docs/best_practice.md +94 -0
- docs/changlelog.md +29 -0
- docs/example_crop.gif +3 -0
- docs/example_crop_still.gif +3 -0
- docs/example_full.gif +3 -0
- docs/example_full_crop.gif +0 -0
- docs/example_full_enhanced.gif +3 -0
- docs/face3d.md +47 -0
- docs/free_view_result.gif +3 -0
- docs/install.md +39 -0
- docs/resize_good.gif +3 -0
- docs/resize_no.gif +3 -0
- docs/sadtalker_logo.png +0 -0
- docs/using_ref_video.gif +3 -0
- docs/webui_extension.md +49 -0
.gitattributes
CHANGED
@@ -33,3 +33,11 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+docs/example_crop_still.gif filter=lfs diff=lfs merge=lfs -text
+docs/example_crop.gif filter=lfs diff=lfs merge=lfs -text
+docs/example_full_enhanced.gif filter=lfs diff=lfs merge=lfs -text
+docs/example_full.gif filter=lfs diff=lfs merge=lfs -text
+docs/free_view_result.gif filter=lfs diff=lfs merge=lfs -text
+docs/resize_good.gif filter=lfs diff=lfs merge=lfs -text
+docs/resize_no.gif filter=lfs diff=lfs merge=lfs -text
+docs/using_ref_video.gif filter=lfs diff=lfs merge=lfs -text
docs/FAQ.md
ADDED
@@ -0,0 +1,46 @@
## Frequently Asked Questions

**Q: `ffmpeg` is not recognized as an internal or external command**

On Linux, you can install ffmpeg via `conda install ffmpeg`. On macOS, try installing ffmpeg via `brew install ffmpeg`. On Windows, make sure `ffmpeg` is in your `%PATH%` as suggested in [#54](https://github.com/Winfredy/SadTalker/issues/54); you can follow [this guide](https://www.geeksforgeeks.org/how-to-install-ffmpeg-on-windows/) to install `ffmpeg`.

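A quick way to confirm that `ffmpeg` is reachable from your shell (this is only a sanity check, not part of SadTalker itself):

```
# should print the ffmpeg version; if the command is not found, the PATH is not set up
ffmpeg -version
```
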
**Q: Running requirements**

Please refer to the discussion here: https://github.com/Winfredy/SadTalker/issues/124#issuecomment-1508113989


**Q: ModuleNotFoundError: No module named 'ai'**

Please check the file size of the `epoch_20.pth` checkpoint. (https://github.com/Winfredy/SadTalker/issues/167, https://github.com/Winfredy/SadTalker/issues/113)

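One way to check that the checkpoint downloaded completely (the path below assumes the default `checkpoints/` folder):

```
# a truncated download will show an unexpectedly small size here
ls -lh checkpoints/epoch_20.pth
```
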
**Q: Illegal hardware error on Mac M1**

Please reinstall `dlib` separately via `pip install dlib`. (https://github.com/Winfredy/SadTalker/issues/129, https://github.com/Winfredy/SadTalker/issues/109)


**Q: FileNotFoundError: [Errno 2] No such file or directory: checkpoints\BFM_Fitting\similarity_Lm3D_all.mat**

Make sure you have downloaded the checkpoints and the gfpgan models as described [here](https://github.com/Winfredy/SadTalker#-2-download-trained-models) and placed them in the right locations.

**Q: RuntimeError: unexpected EOF, expected 237192 more bytes. The file might be corrupted.**

The files were not downloaded completely. Please update the code and download the gfpgan folders as described [here](https://github.com/Winfredy/SadTalker#-2-download-trained-models).

**Q: CUDA out of memory error**

Please refer to https://stackoverflow.com/questions/73747731/runtimeerror-cuda-out-of-memory-how-setting-max-split-size-mb

```
# windows
set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
python inference.py ...

# linux
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
python inference.py ...
```

**Q: Error while decoding stream #0:0: Invalid data found when processing input [mp3float @ 0000015037628c00] Header missing**

Our method only supports WAV or MP3 files as input, so please make sure the audio you feed in is in one of these formats.

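If your audio is in another format, one simple option is to convert it with `ffmpeg` before running inference; the file names below are only placeholders:

```
# convert another audio format to a plain WAV file
ffmpeg -i input.m4a driven_audio.wav
```
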
docs/best_practice.md
ADDED
@@ -0,0 +1,94 @@
# Best Practices and Tips for Configuration

> Our model only works on REAL people or on portrait images similar to a REAL person. The anime talking-head generation method will be released in the future.

Advanced configuration options for `inference.py`:

| Name | Configuration | Default | Explanation |
|:------------- |:------------- |:----- | :------------- |
| Enhance Mode | `--enhancer` | None | Use `gfpgan` or `RestoreFormer` to enhance the generated face via a face restoration network. |
| Background Enhancer | `--background_enhancer` | None | Use `realesrgan` to enhance the full video. |
| Still Mode | `--still` | False | Use the same pose parameters as the original image, producing less head motion. |
| Expressive Mode | `--expression_scale` | 1.0 | A larger value makes the expression motion stronger. |
| Save Path | `--result_dir` | `./results` | The results will be saved in this location. |
| Preprocess | `--preprocess` | `crop` | Run and produce the results on the cropped input image. Other choices: `resize`, where the image is resized to the specified resolution; `full`, which runs the full-image animation (use with `--still` for better results). |
| Ref Mode (eye) | `--ref_eyeblink` | None | A video path; we borrow the eye blinks from this reference video to provide more natural eye and eyebrow movement. |
| Ref Mode (pose) | `--ref_pose` | None | A video path; we borrow the head pose from this reference video. |
| 3D Mode | `--face3dvis` | False | Needs additional installation. More details on generating the 3D face can be found [here](docs/face3d.md). |
| Free-view Mode | `--input_yaw`,<br> `--input_pitch`,<br> `--input_roll` | None | Generate novel-view or free-view 4D talking heads from a single image. More details can be found [here](https://github.com/Winfredy/SadTalker#generating-4d-free-view-talking-examples-from-audio-and-a-single-image). |

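For reference, a typical invocation combining several of the options above might look like the following sketch (the input paths are placeholders):

```bash
python inference.py --driven_audio <audio.wav> \
                    --source_image <picture.png> \
                    --result_dir ./results \
                    --preprocess full \
                    --still \
                    --expression_scale 1.2
```
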
### About `--preprocess`

Our system automatically handles the input images via `crop`, `resize` and `full`.

In `crop` mode, we only generate the cropped image via the facial keypoints and produce the animated face avatar. The animation of both the expression and the head pose is realistic.

> Still mode will stop the eye blinks and head pose movement.

| [input image @bagbag1815](https://twitter.com/bagbag1815/status/1642754319094108161) | crop | crop w/still |
|:--------------------: |:--------------------: | :----: |
| <img src='../examples/source_image/full_body_2.png' width='380'> | ![full_body_2](example_crop.gif) | ![full_body_2](example_crop_still.gif) |

In `resize` mode, we resize the whole image to generate the full talking-head video. Thus, an image similar to an ID photo can be produced. ⚠️ It will produce bad results for full-body images.

| <img src='../examples/source_image/full_body_2.png' width='380'> | <img src='../examples/source_image/full4.jpeg' width='380'> |
|:--------------------: |:--------------------: |
| ❌ not suitable for resize mode | ✅ good for resize mode |
| <img src='resize_no.gif'> | <img src='resize_good.gif' width='380'> |

In `full` mode, our model automatically processes the cropped region and pastes it back into the original image. Remember to use `--still` to keep the original head pose.

| input | `--still` | `--still` & `enhancer` |
|:--------------------: |:--------------------: | :--:|
| <img src='../examples/source_image/full_body_2.png' width='380'> | <img src='./example_full.gif' width='380'> | <img src='./example_full_enhanced.gif' width='380'> |


### About `--enhancer`

For higher resolution, we integrate [gfpgan](https://github.com/TencentARC/GFPGAN) and [real-esrgan](https://github.com/xinntao/Real-ESRGAN) for different purposes. Just add `--enhancer <gfpgan or RestoreFormer>` or `--background_enhancer <realesrgan>` to enhance the face and the full image, respectively.

```bash
# make sure the packages above are available:
pip install gfpgan
pip install realesrgan
```

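As an illustration (the input paths are placeholders), a full-image run with both face and background enhancement could look like:

```bash
python inference.py --driven_audio <audio.wav> \
                    --source_image <picture.png> \
                    --preprocess full \
                    --still \
                    --enhancer gfpgan \
                    --background_enhancer realesrgan
```
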
### About `--face3dvis`

This flag indicates that we can generate the 3D-rendered face and its 3D facial landmarks. More details can be found [here](face3d.md).

| Input | Animated 3D face |
|:-------------: | :-------------: |
| <img src='../examples/source_image/art_0.png' width='200px'> | <video src="https://user-images.githubusercontent.com/4397546/226856847-5a6a0a4d-a5ec-49e2-9b05-3206db65e8e3.mp4"></video> |

> Kindly ensure the audio is unmuted, since embedded videos play muted by default on GitHub.


#### Reference eye-blink mode

| Input, w/ reference video, reference video |
|:-------------: |
| ![free_view](using_ref_video.gif) |
| If the reference video is shorter than the input audio, we will loop the reference video. |

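A minimal sketch of borrowing eye blinks from a reference video via the `--ref_eyeblink` option listed above (paths are placeholders):

```bash
python inference.py --driven_audio <audio.wav> \
                    --source_image <picture.png> \
                    --ref_eyeblink <reference_video.mp4>
```
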
#### Generating 4D free-view talking examples from audio and a single image

We use `--input_yaw`, `--input_pitch` and `--input_roll` to control the head pose. For example, `--input_yaw -20 30 10` means the head yaw changes from -20 degrees to 30 degrees and then from 30 degrees to 10 degrees.

```bash
python inference.py --driven_audio <audio.wav> \
                    --source_image <video.mp4 or picture.png> \
                    --result_dir <a folder to store results> \
                    --input_yaw -20 30 10
```

| Results, free-view results, novel-view results |
|:-------------: |
| ![free_view](free_view_result.gif) |
docs/changlelog.md
ADDED
@@ -0,0 +1,29 @@
## changelogs


- __[2023.04.06]__: The stable-diffusion webui extension is released.

- __[2023.04.03]__: Enabled TTS in the Hugging Face and local gradio demos.

- __[2023.03.30]__: Launched the beta version of the full-body mode.

- __[2023.03.30]__: Launched a new feature: by using reference videos, our algorithm can generate videos with more natural eye blinking and some eyebrow movement.

- __[2023.03.29]__: `resize mode` is online via `python inference.py --preprocess resize`! It produces a larger crop of the image, as discussed in https://github.com/Winfredy/SadTalker/issues/35.

- __[2023.03.29]__: The local gradio demo is online! Run `python app.py` to start it. A new `requirements.txt` is used to avoid bugs in `librosa`.

- __[2023.03.28]__: The online demo is launched on [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/vinthony/SadTalker), thanks AK!

- __[2023.03.22]__: Launched a new feature: generating the 3D face animation from a single image. New applications based on it will follow.

- __[2023.03.22]__: Launched a new feature: `still mode`, where only a small head pose is produced via `python inference.py --still`.

- __[2023.03.18]__: Support for `expression intensity`: you can now change the intensity of the generated motion via `python inference.py --expression_scale 1.3` (some value > 1).

- __[2023.03.18]__: Reorganized the data folders; you can now download the checkpoints automatically using `bash scripts/download_models.sh`.
- __[2023.03.18]__: Officially integrated [GFPGAN](https://github.com/TencentARC/GFPGAN) for face enhancement; use `python inference.py --enhancer gfpgan` for better visual quality.
- __[2023.03.14]__: Pinned the version of the `joblib` package to remove the errors when using `librosa`; [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Winfredy/SadTalker/blob/main/quick_demo.ipynb) is online!
- __[2023.03.06]__: Fixed some bugs in the code and errors in the installation.
- __[2023.03.03]__: Released the test code for audio-driven single-image animation!
- __[2023.02.28]__: SadTalker has been accepted by CVPR 2023!
docs/example_crop.gif
ADDED
Git LFS Details
docs/example_crop_still.gif
ADDED
Git LFS Details
docs/example_full.gif
ADDED
Git LFS Details
docs/example_full_crop.gif
ADDED
docs/example_full_enhanced.gif
ADDED
Git LFS Details
docs/face3d.md
ADDED
@@ -0,0 +1,47 @@
## 3D Face Visualization

We use `pytorch3d` to visualize the 3D faces from a single image.

The requirements for 3D visualization are difficult to install, so here's a tutorial:

```bash
git clone https://github.com/OpenTalker/SadTalker.git
cd SadTalker
conda create -n sadtalker3d python=3.8
source activate sadtalker3d

conda install ffmpeg
conda install -c fvcore -c iopath -c conda-forge fvcore iopath
conda install libgcc gmp

pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0 --extra-index-url https://download.pytorch.org/whl/cu113

# install pytorch3d
pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/py38_cu113_pyt1110/download.html

pip install -r requirements3d.txt

### install gfpgan for the enhancer
pip install git+https://github.com/TencentARC/GFPGAN

### if a gcc version problem occurs with `_C` from pytorch3d, add the anaconda lib path to LD_LIBRARY_PATH
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/$YOUR_ANACONDA_PATH/lib/
```

Then, generate the result via:

```bash
python inference.py --driven_audio <audio.wav> \
                    --source_image <video.mp4 or picture.png> \
                    --result_dir <a folder to store results> \
                    --face3dvis
```

The result will appear as a file named `face3d.mp4`.

More applications of 3D face rendering will be released soon.
docs/free_view_result.gif
ADDED
Git LFS Details
docs/install.md
ADDED
@@ -0,0 +1,39 @@
### macOS

This method has been tested on an M1 Mac (macOS 13.3).

```bash
git clone https://github.com/OpenTalker/SadTalker.git
cd SadTalker
conda create -n sadtalker python=3.8
conda activate sadtalker
# install pytorch 2.0
pip install torch torchvision torchaudio
conda install ffmpeg
pip install -r requirements.txt
pip install dlib  # macOS needs to install the original dlib.
```

### Windows Native

- Make sure you have `ffmpeg` in the `%PATH%` as suggested in [#54](https://github.com/Winfredy/SadTalker/issues/54); follow [this](https://www.geeksforgeeks.org/how-to-install-ffmpeg-on-windows/) tutorial to install `ffmpeg`, or use scoop as shown below.

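If you already use [Scoop](https://scoop.sh), one way to get `ffmpeg` onto the `%PATH%` is:

```
scoop install ffmpeg
```
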
### Windows WSL

- Make sure the environment is set: `export LD_LIBRARY_PATH=/usr/lib/wsl/lib:$LD_LIBRARY_PATH`


### Docker Installation

A community Docker image by [@thegenerativegeneration](https://github.com/thegenerativegeneration) is available on [Docker Hub](https://hub.docker.com/repository/docker/wawa9000/sadtalker) and can be used directly:

```bash
docker run --gpus "all" --rm -v $(pwd):/host_dir wawa9000/sadtalker \
    --driven_audio /host_dir/deyu.wav \
    --source_image /host_dir/image.jpg \
    --expression_scale 1.0 \
    --still \
    --result_dir /host_dir
```
docs/resize_good.gif
ADDED
Git LFS Details
docs/resize_no.gif
ADDED
Git LFS Details
docs/sadtalker_logo.png
ADDED
docs/using_ref_video.gif
ADDED
Git LFS Details
docs/webui_extension.md
ADDED
@@ -0,0 +1,49 @@
## Run SadTalker as a Stable Diffusion WebUI Extension

1. Install the latest version of [stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) and install SadTalker via the `Extensions` tab.

<img width="726" alt="image" src="https://user-images.githubusercontent.com/4397546/230698519-267d1d1f-6e99-4dd4-81e1-7b889259efbd.png">

2. Download the checkpoints manually. For Linux and Mac:

```bash
cd SOMEWHERE_YOU_LIKE

bash <(wget -qO- https://raw.githubusercontent.com/Winfredy/OpenTalker/main/scripts/download_models.sh)
```

For Windows, you can download all the checkpoints [here](https://github.com/OpenTalker/SadTalker/tree/main#2-download-models).

3.1. Option 1: put the checkpoints in `stable-diffusion-webui/models/SadTalker` or `stable-diffusion-webui/extensions/SadTalker/checkpoints/`; the checkpoints will be detected automatically.

3.2. Option 2: set the path `SADTALKER_CHECKPOINTS` in `webui_user.sh` (Linux) or `webui_user.bat` (Windows):

> This only works if you start the webui directly from `webui_user.sh` or `webui_user.bat`.

```bash
# Windows (webui_user.bat)
set SADTALKER_CHECKPOINTS=D:\SadTalker\checkpoints

# Linux/macOS (webui_user.sh)
export SADTALKER_CHECKPOINTS=/path/to/SadTalker/checkpoints
```

4. Start the WebUI via `webui.sh` or `webui_user.sh` (Linux) or `webui_user.bat` (Windows), or any other method. SadTalker can then be used directly inside stable-diffusion-webui.

<img width="726" alt="image" src="https://user-images.githubusercontent.com/4397546/230698614-58015182-2916-4240-b324-e69022ef75b3.png">

## Questions

1. If you are running on CPU, you need to specify `--disable-safe-unpickle` in `webui_user.sh` or `webui_user.bat`.

```bash
# windows (webui_user.bat)
set COMMANDLINE_ARGS="--disable-safe-unpickle"

# linux (webui_user.sh)
export COMMANDLINE_ARGS="--disable-safe-unpickle"
```


(If you're unable to use the `full` mode, please read this [discussion](https://github.com/Winfredy/SadTalker/issues/78).)