# DragAnything: Motion Control for Anything using Entity Representation


## :notes: **Updates**

- [x] Mar. 24, 2024. Support interactive demo with Gradio.
- [x] Mar. 13, 2024. Release the inference code.
- [x] Mar. 12, 2024. Repo initialization.

---

## 🐱 Abstract

We introduce DragAnything, which utilizes an entity representation to achieve motion control for any object in controllable video generation. Compared to existing motion control methods, DragAnything offers several advantages. Firstly, trajectory-based interaction is more user-friendly, since acquiring other guidance signals (e.g., masks, depth maps) is labor-intensive; users only need to draw a line (trajectory) during interaction. Secondly, our entity representation serves as an open-domain embedding capable of representing any object, enabling motion control for diverse entities, including the background. Lastly, our entity representation allows simultaneous and distinct motion control for multiple objects. Extensive experiments demonstrate that DragAnything achieves state-of-the-art FVD, FID, and user-study results, particularly for object motion control, where our method surpasses the previous state of the art (DragNUWA) by 26% in human voting.

---

## User-Trajectory Interaction with SAM
*(Demo: Input Image → Drag point with SAM → 2D Gaussian Trajectory → Generated Video)*
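As the pipeline above shows, a drag point selected with SAM is turned into a per-frame 2D Gaussian trajectory map that conditions the video generation together with the entity representation. The sketch below is only an illustration of that representation, not DragAnything's actual preprocessing code; the map size, `sigma`, and the example trajectory are assumed values.

```python
import numpy as np

def gaussian_heatmap(height, width, cx, cy, sigma=10.0):
    """Render one trajectory point (cx, cy) as a 2D Gaussian over a height x width map."""
    ys, xs = np.mgrid[0:height, 0:width]
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))

def trajectory_to_heatmaps(points, height, width, sigma=10.0):
    """points: per-frame (x, y) coordinates -> array of shape (T, H, W)."""
    return np.stack([gaussian_heatmap(height, width, x, y, sigma) for x, y in points])

# Hypothetical example: drag an entity horizontally across a 256x256 frame over 14 frames.
trajectory = [(20 + 15 * t, 128) for t in range(14)]
heatmaps = trajectory_to_heatmaps(trajectory, 256, 256)  # (14, 256, 256), values in (0, 1]
```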
## Comparison with DragNUWA
*(Side-by-side comparisons of DragNUWA and Ours: Input Image and Drag → Generated Video → Visualization for Pixel Motion)*
## More Demo
*(Demos: Drag point with SAM → 2D Gaussian → Generated Video → Visualization for Pixel Motion)*
## Various Motion Control
*(Each demo shows: Drag point with SAM → 2D Gaussian → Generated Video → Visualization for Pixel Motion)*

- (a) Motion Control for Foreground
- (b) Motion Control for Background
- (c) Simultaneous Motion Control for Foreground and Background
- (d) Motion Control for Camera Motion
## 🔧 Dependencies and Dataset Prepare

### Dependencies

- Python >= 3.8 (Recommend to use [Anaconda](https://www.anaconda.com/download/#linux) or [Miniconda](https://docs.conda.io/en/latest/miniconda.html))
- [PyTorch >= 1.13.0+cu11.7](https://pytorch.org/)

```Shell
git clone https://github.com/Showlab/DragAnything.git
cd DragAnything

conda create -n DragAnything python=3.8
conda activate DragAnything
pip install -r environment.txt
```

### Dataset Prepare

Download [VIPSeg](https://github.com/VIPSeg-Dataset/VIPSeg-Dataset) and [Youtube-VOS](https://youtube-vos.org/) to the ```./data``` directory.

### Motion Trajectory Annotation Prepare

You can use our preprocessed annotation files, or generate your own motion trajectory annotations with [Co-Tracker](https://github.com/facebookresearch/co-tracker?tab=readme-ov-file#installation-instructions). If you generate the annotations yourself, first follow the installation steps from Co-Tracker:

```Shell
cd ./utils/co-tracker
pip install -e .
pip install matplotlib flow_vis tqdm tensorboard

mkdir -p checkpoints
cd checkpoints
wget https://huggingface.co/facebook/cotracker/resolve/main/cotracker2.pth
cd ..
```

Then, modify the corresponding ```video_path```, ```ann_path```, and ```save_path``` in the ```Generate_Trajectory_for_VIPSeg.sh``` file and run it. The trajectory annotations will be saved as ```.json``` files in the ```save_path``` directory.

```Shell
sh Generate_Trajectory_for_VIPSeg.sh
```
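If you prefer to call Co-Tracker from Python rather than through the shell script, the sketch below uses the offline predictor API described in the Co-Tracker README. The video path and the JSON layout are illustrative assumptions; the annotation format DragAnything actually consumes is the one written by ```Generate_Trajectory_for_VIPSeg.sh```.

```python
import json
import torch
from cotracker.predictor import CoTrackerPredictor
from cotracker.utils.visualizer import read_video_from_path  # helper shipped with Co-Tracker

# Hypothetical input clip; read_video_from_path returns a (T, H, W, C) uint8 array.
video = read_video_from_path("./data/example_video.mp4")
video = torch.from_numpy(video).permute(0, 3, 1, 2)[None].float()  # (1, T, C, H, W)

model = CoTrackerPredictor(checkpoint="./utils/co-tracker/checkpoints/cotracker2.pth")
pred_tracks, pred_visibility = model(video, grid_size=30)  # tracks: (1, T, N, 2)

# Illustrative save format only; see Generate_Trajectory_for_VIPSeg.sh for the real schema.
with open("example_trajectory.json", "w") as f:
    json.dump({"tracks": pred_tracks[0].tolist(), "visibility": pred_visibility[0].tolist()}, f)
```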
### Trajectory Visualization

You can run the following command for visualization.

```Shell
cd ./utils/
python vis_trajectory.py
```

### Pretrained Model Preparation

We adopt [ChilloutMix](https://civitai.com/models/6424/chilloutmix) as the pretrained model for entity representation extraction; please download the Diffusers version:

```bash
mkdir -p utils/pretrained_models
cd utils/pretrained_models

# Diffusers-version ChilloutMix to utils/pretrained_models
git-lfs clone https://huggingface.co/windwhinny/chilloutmix.git
```

You can also download our pretrained ControlNet model:

```bash
mkdir -p model_out/DragAnything
cd model_out/DragAnything

# Diffusers-version DragAnything to model_out/DragAnything
git-lfs clone https://huggingface.co/weijiawu/DragAnything
```

## :paintbrush: Train (Awaiting release)

### 1) Semantic Embedding Extraction

```Shell
cd ./utils/
python extract_semantic_point.py
```

### 2) Train DragAnything

For VIPSeg:

```Shell
sh ./script/train_VIPSeg.sh
```

For YouTube-VOS:

```Shell
sh ./script/train_youtube_vos.sh
```

## :paintbrush: Evaluation

### Evaluation for [FID](https://github.com/mseitzer/pytorch-fid)

```Shell
cd utils
sh Evaluation_FID.sh
```

### Evaluation for [Fréchet Video Distance (FVD)](https://github.com/hyenal/relate/blob/main/extras/README.md)

```Shell
cd utils/Eval_FVD
sh compute_fvd.sh
```

### Evaluation for ObjMC

```Shell
cd utils/Eval_ObjMC
python ./ObjMC.py
```

## :paintbrush: Inference for a single video

```Shell
python demo.py
```

Or run the interactive inference with Gradio (requires ```gradio==3.50.2```). Download the ```sam_vit_h_4b8939.pth``` weights from [SAM](https://github.com/facebookresearch/segment-anything?tab=readme-ov-file#model-checkpoints), then:

```Shell
cd ./script
python gradio_run.py
```

### :paintbrush: Visualization of pixel motion for the generated video

```Shell
cd utils/co-tracker
python demo.py
```

## 📖 BibTeX

```BibTeX
@misc{wu2024draganything,
  title={DragAnything: Motion Control for Anything using Entity Representation},
  author={Weijia Wu and Zhuang Li and Yuchao Gu and Rui Zhao and Yefei He and David Junhao Zhang and Mike Zheng Shou and Yan Li and Tingting Gao and Di Zhang},
  year={2024},
  eprint={2403.07420},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```

## 🤗 Acknowledgements

- Thanks to [Diffusers](https://github.com/huggingface/diffusers) for the wonderful work and codebase.
- Thanks to [svd-temporal-controlnet](https://github.com/CiaraStrawberry/svd-temporal-controlnet) for the wonderful work and codebase.
- Thanks to chaojie for building [ComfyUI-DragAnything](https://github.com/chaojie/ComfyUI-DragAnything).