MyNiuuu committed on
Commit 41a9b74
1 Parent(s): 992d625

acknowledgements

Files changed (1)
README.md +12 -6
README.md CHANGED
@@ -36,9 +36,10 @@ We have released the Gradio inference code and the checkpoints for trajectory-ba
 
 ## 📰 CODE RELEASE
 - [x] (2024.05.31) Gradio demo and checkpoints for trajectory-based image animation
+- [ ] Training scripts for trajectory-based image animation
 - [ ] Inference scripts and checkpoints for keypoint-based facial image animation
+- [ ] Training scripts for keypoint-based facial image animation
 - [ ] Inference Gradio demo for hybrid image animation
-- [ ] Training codes
 
 
 ## Introduction
@@ -69,11 +70,16 @@ During the training stage, we generate sparse control signals through sparse mot
 Our inference demo is based on Gradio. Please refer to `./MOFA-Video-Traj/README.md` for instructions.
 
 
-
+## Citation
+```
+@article{niu2024mofa,
+  title={MOFA-Video: Controllable Image Animation via Generative Motion Field Adaptions in Frozen Image-to-Video Diffusion Model},
+  author={Niu, Muyao and Cun, Xiaodong and Wang, Xintao and Zhang, Yong and Shan, Ying and Zheng, Yinqiang},
+  journal={arXiv preprint arXiv:2405.20222},
+  year={2024}
+}
+```
 
 ## Acknowledgements
-Our Gradio codes are based on the early release of [DragNUWA](https://arxiv.org/abs/2308.08089).
-The landmark generation code used in our repository is based on [SadTalker](https://github.com/OpenTalker/SadTalker) and [AniPortrait](https://github.com/Zejun-Yang/AniPortrait).
-Our training codes are based on [Diffusers](https://github.com/huggingface/diffusers) and [SVD_Xtend](https://github.com/pixeli99/SVD_Xtend).
-We sincerely appreciate the code release of these projects.
+We sincerely appreciate the code release of the following projects: [DragNUWA](https://arxiv.org/abs/2308.08089), [SadTalker](https://github.com/OpenTalker/SadTalker), [AniPortrait](https://github.com/Zejun-Yang/AniPortrait), [Diffusers](https://github.com/huggingface/diffusers), [SVD_Xtend](https://github.com/pixeli99/SVD_Xtend), [Conditional-Motion-Propagation](https://github.com/XiaohangZhan/conditional-motion-propagation), and [Unimatch](https://github.com/autonomousvision/unimatch).
 
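The context line above notes that the inference demo is based on Gradio and launched from `./MOFA-Video-Traj` per that folder's README. For readers unfamiliar with Gradio, here is a minimal, hypothetical sketch of the kind of entry point such a demo exposes; the function name `animate` and the interface layout are illustrative assumptions, not the actual MOFA-Video demo code.

```python
# Hypothetical minimal Gradio entry-point sketch -- NOT the actual MOFA-Video demo.
# The real demo lives in ./MOFA-Video-Traj and is launched per its README.
import gradio as gr


def animate(image_path: str) -> str:
    """Placeholder: a real demo would run the trajectory-conditioned
    image-to-video pipeline here and return the generated result."""
    return image_path  # echoes the input; stands in for model inference


demo = gr.Interface(
    fn=animate,
    inputs=gr.Image(type="filepath", label="Input image"),
    outputs=gr.Image(label="Result (placeholder)"),
)

if __name__ == "__main__":
    demo.launch()  # serves the UI locally, by default at http://127.0.0.1:7860
```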