## :notes: **Updates**
- [x] Mar. 24, 2024. Support interactive demo with Gradio.
- [x] Mar. 13, 2024. Release the inference code.
- [x] Mar. 12, 2024. Repo initialization.
---
## 🐱 Abstract
We introduce DragAnything, which utilizes an entity representation to achieve motion control for any object in controllable video generation. Compared to existing motion control methods, DragAnything offers several advantages. First, trajectory-based interaction is more user-friendly, since acquiring other guidance signals (e.g., masks, depth maps) is labor-intensive; users only need to draw a line (trajectory). Second, our entity representation serves as an open-domain embedding capable of representing any object, enabling motion control for diverse entities, including the background. Lastly, our entity representation allows simultaneous and distinct motion control for multiple objects. Extensive experiments demonstrate that DragAnything achieves state-of-the-art performance on FVD, FID, and user studies, particularly for object motion control, where it surpasses the previous state of the art (DragNUWA) by 26% in human voting.
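The "2D Gaussian trajectory" above can be pictured as one heatmap per frame, peaked at that frame's trajectory point. The following is a minimal numpy-only sketch of that idea; the function names and the `sigma` value are our own illustration, not the paper's code:

```python
import numpy as np

def gaussian_heatmap(center, shape, sigma=10.0):
    """Render one 2D Gaussian centered at (x, y) into an (H, W) map."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    cx, cy = center
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

def trajectory_to_heatmaps(points, shape, sigma=10.0):
    """One heatmap per trajectory point: a per-frame motion guidance signal."""
    return np.stack([gaussian_heatmap(p, shape, sigma) for p in points])

# A short drag from (20, 32) to (50, 32) over 4 frames.
traj = [(20, 32), (30, 32), (40, 32), (50, 32)]
maps = trajectory_to_heatmaps(traj, shape=(64, 64))
print(maps.shape)  # (4, 64, 64); each map peaks (value 1.0) at its point
```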
---
## User-Trajectory Interaction with SAM
| Input Image | Drag point with SAM | 2D Gaussian Trajectory | Generated Video |
| --- | --- | --- | --- |
| ![](./assets/1709660422197.jpg) | ![](./assets/1709660459944.jpg) | ![](./assets/image28%20(3).gif) | ![](./assets/image28%20(2).gif) |
| ![](./assets/1709660422197.jpg) | ![](./assets/1709660471568.jpg) | ![](./assets/image2711.gif) | ![](./assets/image27%20(1)1.gif) |
| ![](./assets/1709660422197.jpg) | ![](./assets/1709660965701.jpg) | ![](./assets/image29111.gif) | ![](./assets/image29%20(1)1.gif) |
| ![](./assets/1709660422197.jpg) | ![](./assets/1709661150250.jpg) | ![](./assets/image30%20(1)1.gif) | ![](./assets/image3011.gif) |
## Comparison with DragNUWA
| Model | Input Image and Drag | Generated Video | Visualization for Pixel Motion |
| --- | --- | --- | --- |
| DragNUWA | ![](./assets/1709661872632.jpg) | ![](./assets/image63111.gif) | ![](./assets/image6411.gif) |
| Ours | ![](./assets/1709662077471.jpg) | ![](./assets/image65111.gif) | ![](./assets/image6611.gif) |
| DragNUWA | ![](./assets/1709662293661.jpg) | ![](./assets/image77.gif) | ![](./assets/image76.gif) |
| Ours | ![](./assets/1709662429867.jpg) | ![](./assets/image75.gif) | ![](./assets/image74.gif) |
| DragNUWA | ![](./assets/1709662596207.jpg) | ![](./assets/image84.gif) | ![](./assets/image85.gif) |
| Ours | ![](./assets/1709662724643.jpg) | ![](./assets/image87.gif) | ![](./assets/image88.gif) |
## More Demo
| Drag point with SAM | 2D Gaussian | Generated Video | Visualization for Pixel Motion |
| --- | --- | --- | --- |
| ![](./assets/1709656550343.jpg) | ![](./assets/image188.gif) | ![](./assets/image190.gif) | ![](./assets/image189.gif) |
| ![](./assets/1709657635807.jpg) | ![](./assets/image187%20(1).gif) | ![](./assets/image186.gif) | ![](./assets/image185.gif) |
| ![](./assets/1709658516913.jpg) | ![](./assets/image158.gif) | ![](./assets/image159.gif) | ![](./assets/image160.gif) |
| ![](./assets/1709658781935.jpg) | ![](./assets/image163.gif) | ![](./assets/image161.gif) | ![](./assets/image162.gif) |
| ![](./assets/1709659276722.jpg) | ![](./assets/image165.gif) | ![](./assets/image167.gif) | ![](./assets/image166.gif) |
| ![](./assets/1709659787625.jpg) | ![](./assets/image172.gif) | ![](./assets/Our_Motorbike_cloud_floor.gif) | ![](./assets/image171.gif) |
## Various Motion Control
| Drag point with SAM | 2D Gaussian | Generated Video | Visualization for Pixel Motion |
| --- | --- | --- | --- |
| ![](./assets/1709663429471.jpg) | ![](./assets/image265.gif) | ![](./assets/image265%20(1).gif) | ![](./assets/image268.gif) |
| ![](./assets/1709663831581.jpg) | ![](./assets/image274.gif) | ![](./assets/image274%20(1).gif) | ![](./assets/image276.gif) |
| **(a) Motion Control for Foreground** | | | |
| ![](./assets/1709664593048.jpg) | ![](./assets/image270.gif) | ![](./assets/image270%20(1).gif) | ![](./assets/image269.gif) |
| ![](./assets/1709664834397.jpg) | ![](./assets/image271.gif) | ![](./assets/image271%20(1).gif) | ![](./assets/image272.gif) |
| **(b) Motion Control for Background** | | | |
| ![](./assets/1709665073460.jpg) | ![](./assets/image279.gif) | ![](./assets/image278.gif) | ![](./assets/image277.gif) |
| ![](./assets/1709665252573.jpg) | ![](./assets/image282.gif) | ![](./assets/image280.gif) | ![](./assets/image281.gif) |
| **(c) Simultaneous Motion Control for Foreground and Background** | | | |
| ![](./assets/1709665505339.jpg) | ![](./assets/image283.gif) | ![](./assets/image283%20(1).gif) | ![](./assets/image285.gif) |
| ![](./assets/1709666205795.jpg) | ![](./assets/image286.gif) | ![](./assets/image288.gif) | ![](./assets/image287.gif) |
| ![](./assets/1709666401284.jpg) | ![](./assets/image289.gif) | ![](./assets/image290.gif) | ![](./assets/image291.gif) |
| ![](./assets/1709666772216.jpg) | ![](./assets/image294.gif) | ![](./assets/image293.gif) | ![](./assets/image292.gif) |
| **(d) Motion Control for Camera Motion** | | | |
## 🔧 Dependencies and Dataset Prepare
### Dependencies
- Python >= 3.8 (Recommend to use [Anaconda](https://www.anaconda.com/download/#linux) or [Miniconda](https://docs.conda.io/en/latest/miniconda.html))
- [PyTorch >= 1.13.0+cu11.7](https://pytorch.org/)
```Shell
git clone https://github.com/Showlab/DragAnything.git
cd DragAnything
conda create -n DragAnything python=3.8
conda activate DragAnything
pip install -r environment.txt
```
### Dataset Prepare
Download [VIPSeg](https://github.com/VIPSeg-Dataset/VIPSeg-Dataset) and [Youtube-VOS](https://youtube-vos.org/) to the ```./data``` directory.
### Motion Trajectory Annotation Prepare
You can use our preprocessed annotation files, or generate your own motion trajectory annotations with [Co-Tracker](https://github.com/facebookresearch/co-tracker?tab=readme-ov-file#installation-instructions). To generate them yourself, first install Co-Tracker and download its checkpoint:
```Shell
cd ./utils/co-tracker
pip install -e .
pip install matplotlib flow_vis tqdm tensorboard
mkdir -p checkpoints
cd checkpoints
wget https://huggingface.co/facebook/cotracker/resolve/main/cotracker2.pth
cd ..
```
Then, modify the corresponding ```video_path```, ```ann_path```, and ```save_path``` in the ```Generate_Trajectory_for_VIPSeg.sh``` file, and run the script. The trajectory annotations will be saved as ```.json``` files in the ```save_path``` directory.
```Shell
sh Generate_Trajectory_for_VIPSeg.sh
```
### Trajectory visualization
You can run the following command for visualization.
```Shell
cd ./utils/
python vis_trajectory.py
```
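At its core, the overlay amounts to stamping each trajectory point onto the frame. Below is a minimal numpy-only sketch of that idea; it is our own illustration, not the repo's ```vis_trajectory.py```:

```python
import numpy as np

def draw_trajectory(frame, points, radius=2, color=(255, 0, 0)):
    """Stamp a small filled square of `color` at each (x, y) trajectory point."""
    out = frame.copy()
    h, w, _ = out.shape
    for x, y in points:
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        out[y0:y1, x0:x1] = color
    return out

frame = np.zeros((64, 64, 3), dtype=np.uint8)          # stand-in for a video frame
vis = draw_trajectory(frame, [(10, 10), (20, 15), (30, 20)])
print(vis[10, 10])  # the stamped point is red
```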
### Pretrained Model Preparation
We adopt [ChilloutMix](https://civitai.com/models/6424/chilloutmix) as the pretrained model for extracting the entity representation; please download the diffusers version:
```bash
mkdir -p utils/pretrained_models
cd utils/pretrained_models
# Diffusers-version ChilloutMix to utils/pretrained_models
git-lfs clone https://huggingface.co/windwhinny/chilloutmix.git
```
And you can download our pretrained model for the controlnet:
```bash
mkdir -p model_out/DragAnything
cd model_out/DragAnything
# Diffusers-version DragAnything to model_out/DragAnything
git-lfs clone https://huggingface.co/weijiawu/DragAnything
```
## :paintbrush: Train (Awaiting Release)
### 1) Semantic Embedding Extraction
```Shell
cd ./utils/
python extract_semantic_point.py
```
### 2) Train DragAnything
For VIPSeg
```Shell
sh ./script/train_VIPSeg.sh
```
For YouTube VOS
```Shell
sh ./script/train_youtube_vos.sh
```
## :paintbrush: Evaluation
### Evaluation for [FID](https://github.com/mseitzer/pytorch-fid)
```Shell
cd utils
sh Evaluation_FID.sh
```
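FID compares Gaussian statistics (mean and covariance) of Inception features from real and generated frames, and the same closed form underlies FVD over video features. The final distance is a simple expression, sketched below with numpy only (feature extraction itself is done by the pytorch-fid tool above; the helper names are our own):

```python
import numpy as np

def _sqrtm_psd(m):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(m)
    vals = np.clip(vals, 0.0, None)  # guard against tiny negative eigenvalues
    return (vecs * np.sqrt(vals)) @ vecs.T

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Frechet distance between N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^{1/2})."""
    # Tr((S1 S2)^{1/2}) computed through the symmetric form A S2 A, A = S1^{1/2}
    a = _sqrtm_psd(sigma1)
    covmean_tr = np.trace(_sqrtm_psd(a @ sigma2 @ a))
    diff = mu1 - mu2
    return diff @ diff + np.trace(sigma1) + np.trace(sigma2) - 2 * covmean_tr

# Identical statistics give distance 0.
mu, sigma = np.zeros(4), np.eye(4)
print(frechet_distance(mu, sigma, mu, sigma))  # 0.0 (up to float error)
```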
### Evaluation for [Fréchet Video Distance (FVD)](https://github.com/hyenal/relate/blob/main/extras/README.md)
```Shell
cd utils/Eval_FVD
sh compute_fvd.sh
```
### Evaluation for Eval_ObjMC
```Shell
cd utils/Eval_ObjMC
python ./ObjMC.py
```
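ObjMC scores object motion control by how far the generated object's trajectory deviates from the user's drag trajectory. A minimal numpy sketch of that idea (mean per-frame Euclidean distance; this is our own simplification, and the exact protocol lives in ```utils/Eval_ObjMC/ObjMC.py```):

```python
import numpy as np

def objmc(pred_traj, gt_traj):
    """Mean Euclidean distance between predicted and target trajectories.

    Both inputs are (T, 2) arrays of per-frame (x, y) object positions.
    Lower is better: 0 means the object followed the drag exactly.
    """
    pred = np.asarray(pred_traj, dtype=float)
    gt = np.asarray(gt_traj, dtype=float)
    return float(np.linalg.norm(pred - gt, axis=1).mean())

gt = np.stack([np.arange(5), np.zeros(5)], axis=1)   # move right along y = 0
pred = gt + np.array([0.0, 3.0])                     # constant 3-pixel offset
print(objmc(pred, gt))  # 3.0
```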
## :paintbrush: Inference for a single video
```Shell
python demo.py
```
or run the interactive inference with Gradio (requires ```gradio==3.50.2```). Download the ```sam_vit_h_4b8939.pth``` checkpoint from [SAM](https://github.com/facebookresearch/segment-anything?tab=readme-ov-file#model-checkpoints), then run:
```Shell
cd ./script
python gradio_run.py
```
### :paintbrush: Visualization of pixel motion for the generated video
```Shell
cd utils/co-tracker
python demo.py
```
## 📖 BibTeX
```bibtex
@misc{wu2024draganything,
      title={DragAnything: Motion Control for Anything using Entity Representation},
      author={Weijia Wu and Zhuang Li and Yuchao Gu and Rui Zhao and Yefei He and David Junhao Zhang and Mike Zheng Shou and Yan Li and Tingting Gao and Di Zhang},
      year={2024},
      eprint={2403.07420},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
## 🤗 Acknowledgements
- Thanks to [Diffusers](https://github.com/huggingface/diffusers) for the wonderful work and codebase.
- Thanks to [svd-temporal-controlnet](https://github.com/CiaraStrawberry/svd-temporal-controlnet) for the wonderful work and codebase.
- Thanks to chaojie for building [ComfyUI-DragAnything](https://github.com/chaojie/ComfyUI-DragAnything).