
# AniDoc: Animation Creation Made Easier

https://github.com/user-attachments/assets/99e1e52a-f0e1-49f5-b81f-e787857901e4


Yihao Meng1,2, Hao Ouyang2, Hanlin Wang3,2, Qiuyu Wang2, Wen Wang4,2, Ka Leong Cheng1,2, Zhiheng Liu5, Yujun Shen2, Huamin Qu†,2

1HKUST  2Ant Group  3NJU  4ZJU  5HKU  †corresponding author

AniDoc colorizes a sequence of sketches based on a character design reference with high fidelity, even when the sketches differ significantly from the reference in pose and scale.

We strongly recommend visiting our demo page: https://yihao-meng.github.io/AniDoc_demo/

## Showcases

*(Showcase GIFs; see the demo page for the animated results.)*

## Flexible Usage

### Same Reference with Varying Sketches

*(GIF animations: Satoru Gojo from Jujutsu Kaisen)*

### Same Sketch with Different References

*(GIF animations: Anya Forger from Spy x Family)*

## TODO List

- Release the paper and demo page. Visit https://yihao-meng.github.io/AniDoc_demo/
- Release the inference code.
- Build the Gradio demo.
- Release the training code.
- Release the sparse sketch setting interpolation code.

## Requirements

Training was conducted on 8 A100 GPUs (80 GB VRAM each); inference was tested on an RTX 5000 (32 GB VRAM). In our tests, inference requires about 14 GB of VRAM.
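If you want to confirm that your GPU has enough free memory before running inference, a quick check with `nvidia-smi` (assuming an NVIDIA GPU with drivers installed) looks like this:

```bash
# Report each GPU's name, total memory, and currently free memory.
# Inference needs roughly 14 GB free.
nvidia-smi --query-gpu=name,memory.total,memory.free --format=csv
```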

## Setup

```bash
git clone https://github.com/yihao-meng/AniDoc.git
cd AniDoc
```

Environment

All the tests are conducted in Linux. We suggest running our code in Linux. To set up our environment in Linux, please run:

conda create -n anidoc python=3.8 -y
conda activate anidoc

bash install.sh
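To verify the setup, a minimal sanity check (assuming `install.sh` installs PyTorch, which the SVD-based pipeline depends on) is:

```bash
conda activate anidoc
# Print the PyTorch version and whether a CUDA device is visible.
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```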

## Checkpoints

1. Please download the pre-trained Stable Video Diffusion (SVD) checkpoints from here and put the whole folder under `pretrained_weights`, so that it looks like `./pretrained_weights/stable-video-diffusion-img2vid-xt`.
2. Please download the checkpoint for our UNet and ControlNet from here and put the whole folder as `./pretrained_weights/anidoc`.
3. Please download the CoTracker checkpoint from here and put it as `./pretrained_weights/cotracker2.pth`. (A quick layout check follows this list.)
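Once all three downloads are in place, the expected layout can be confirmed with a short shell check (paths taken directly from the steps above):

```bash
# Each of these paths should exist before running inference.
for p in \
  pretrained_weights/stable-video-diffusion-img2vid-xt \
  pretrained_weights/anidoc \
  pretrained_weights/cotracker2.pth
do
  [ -e "$p" ] && echo "OK      $p" || echo "MISSING $p"
done
```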

## Generate Your Animation!

To colorize a target lineart sequence with a specific character design, run:

```bash
bash scripts_infer/anidoc_inference.sh
```

We provide some test cases in the `data_test` folder. You can also try our model with your own data: change the lineart sequence and the corresponding character design in the script `anidoc_inference.sh`, where `--control_image` refers to the lineart sequence and `--ref_image` refers to the character design.
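Before running on your own data, it can help to see where those two flags are set. A minimal sketch (assuming standard Unix tools; the flag names come from the script, the rest is illustrative) that prints the current input settings and then launches inference:

```bash
# Show the lines in the inference script that set the two inputs,
# so you know what to edit before running.
grep -n -e "--control_image" -e "--ref_image" scripts_infer/anidoc_inference.sh

# After editing those paths to point at your own lineart sequence and
# character design, launch inference as usual.
bash scripts_infer/anidoc_inference.sh
```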

## Citation

If you find our work useful for your research, please cite our paper:

```bibtex
@article{meng2024anidoc,
  title={AniDoc: Animation Creation Made Easier},
  author={Yihao Meng and Hao Ouyang and Hanlin Wang and Qiuyu Wang and Wen Wang and Ka Leong Cheng and Zhiheng Liu and Yujun Shen and Huamin Qu},
  journal={arXiv preprint arXiv:2412.14173},
  year={2024}
}
```