Dataset card metadata: Modalities: Video · Size: < 1K · Tags: art · Libraries: Datasets
hanzhn committed · Commit 1006dbf · 1 Parent(s): 8a3edd6
This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. README.md +24 -20
  2. data/sample000000_raw_video.mp4 +3 -0
  3. data/sample000000_src_mask.mp4 +3 -0
  4. data/sample000000_src_video.mp4 +3 -0
  5. data/sample000001_raw_video.mp4 +3 -0
  6. data/sample000001_src_mask.mp4 +3 -0
  7. data/sample000001_src_video.mp4 +3 -0
  8. data/sample000002_raw_video.mp4 +3 -0
  9. data/sample000002_src_mask.mp4 +3 -0
  10. data/sample000002_src_video.mp4 +3 -0
  11. data/sample000003_raw_video.mp4 +3 -0
  12. data/sample000003_src_mask.mp4 +3 -0
  13. data/sample000003_src_video.mp4 +3 -0
  14. data/sample000004_raw_video.mp4 +3 -0
  15. data/sample000004_src_mask.mp4 +3 -0
  16. data/sample000004_src_video.mp4 +3 -0
  17. data/sample000005_raw_video.mp4 +3 -0
  18. data/sample000005_src_mask.mp4 +3 -0
  19. data/sample000005_src_video.mp4 +3 -0
  20. data/sample000006_raw_video.mp4 +3 -0
  21. data/sample000006_src_mask.mp4 +3 -0
  22. data/sample000006_src_video.mp4 +3 -0
  23. data/sample000007_raw_video.mp4 +3 -0
  24. data/sample000007_src_mask.mp4 +3 -0
  25. data/sample000007_src_video.mp4 +3 -0
  26. data/sample000008_raw_video.mp4 +3 -0
  27. data/sample000008_src_mask.mp4 +3 -0
  28. data/sample000008_src_video.mp4 +3 -0
  29. data/sample000009_raw_video.mp4 +3 -0
  30. data/sample000009_src_mask.mp4 +3 -0
  31. data/sample000009_src_video.mp4 +3 -0
  32. data/sample000010_raw_video.mp4 +3 -0
  33. data/sample000010_src_mask.mp4 +3 -0
  34. data/sample000010_src_video.mp4 +3 -0
  35. data/sample000011_raw_video.mp4 +3 -0
  36. data/sample000011_src_mask.mp4 +3 -0
  37. data/sample000011_src_video.mp4 +3 -0
  38. data/sample000012_raw_video.mp4 +3 -0
  39. data/sample000012_src_mask.mp4 +3 -0
  40. data/sample000012_src_video.mp4 +3 -0
  41. data/sample000013_raw_video.mp4 +3 -0
  42. data/sample000013_src_mask.mp4 +3 -0
  43. data/sample000013_src_video.mp4 +3 -0
  44. data/sample000014_raw_video.mp4 +3 -0
  45. data/sample000014_src_mask.mp4 +3 -0
  46. data/sample000014_src_video.mp4 +3 -0
  47. data/sample000015_raw_video.mp4 +3 -0
  48. data/sample000015_src_mask.mp4 +3 -0
  49. data/sample000015_src_video.mp4 +3 -0
  50. data/sample000016_raw_video.mp4 +3 -0
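
The list above cuts off mid-sample at the 50-file limit; each sample index otherwise contributes the same three files. A minimal sketch of that naming convention, assuming it holds for every sample in the dataset:

```python
# Sketch of the per-sample file layout seen in the changed-files list:
# each sample index maps to a raw video, a source mask, and a source video.
# The naming pattern is taken from the list above; treating it as uniform
# across all samples is an assumption.

def sample_files(index: int) -> list[str]:
    """Return the three expected file paths for one sample index."""
    stem = f"data/sample{index:06d}"
    return [
        f"{stem}_raw_video.mp4",
        f"{stem}_src_mask.mp4",
        f"{stem}_src_video.mp4",
    ]

print(sample_files(0))
```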
README.md CHANGED
@@ -1,10 +1,7 @@
----
-license: apache-2.0
----
-
 <p align="center">
 
 <h1 align="center">VACE: All-in-One Video Creation and Editing</h1>
+<h3 align="center">(ICCV 2025)</h3>
 <p align="center">
 <strong>Zeyinzi Jiang<sup>*</sup></strong>
 ·
@@ -36,18 +33,21 @@ license: apache-2.0
 
 
 ## 🎉 News
+- [x] Oct 17, 2025: [VACE-Benchmark](https://huggingface.co/datasets/ali-vilab/VACE-Benchmark) has been updated to incorporate the evaluation data. [VACE-Page](https://ali-vilab.github.io/VACE-Page/) also features creative community cases, offering researchers and community members better project insight and tracking.
+- [x] Jun 26, 2025: [VACE](https://openaccess.thecvf.com/content/ICCV2025/html/Jiang_VACE_All-in-One_Video_Creation_and_Editing_ICCV_2025_paper.html) is accepted by ICCV 2025.
+- [x] May 14, 2025: 🔥Wan2.1-VACE-1.3B and Wan2.1-VACE-14B models are now available at [HuggingFace](https://huggingface.co/collections/Wan-AI/wan21-68ac4ba85372ae5a8e282a1b) and [ModelScope](https://modelscope.cn/collections/tongyiwanxiang-Wan21-shipinshengcheng-67ec9b23fd8d4f)!
 - [x] Mar 31, 2025: 🔥VACE-Wan2.1-1.3B-Preview and VACE-LTX-Video-0.9 models are now available at [HuggingFace](https://huggingface.co/collections/ali-vilab/vace-67eca186ff3e3564726aff38) and [ModelScope](https://modelscope.cn/collections/VACE-8fa5fcfd386e43)!
 - [x] Mar 31, 2025: 🔥Release code of model inference, preprocessing, and gradio demos.
 - [x] Mar 11, 2025: We propose [VACE](https://ali-vilab.github.io/VACE-Page/), an all-in-one model for video creation and editing.
 
 
 ## 🪄 Models
-| Models | Download Link | Video Size | License |
-|--------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------|-----------------------------------------------------------------------------------------------|
-| VACE-Wan2.1-1.3B-Preview | [Huggingface](https://huggingface.co/ali-vilab/VACE-Wan2.1-1.3B-Preview) 🤗 [ModelScope](https://modelscope.cn/models/iic/VACE-Wan2.1-1.3B-Preview) 🤖 | ~ 81 x 480 x 832 | [Apache-2.0](https://huggingface.co/Wan-AI/Wan2.1-T2V-1.3B/blob/main/LICENSE.txt) |
-| VACE-Wan2.1-1.3B | [To be released](https://github.com/Wan-Video) <img src='https://ali-vilab.github.io/VACE-Page/assets/logos/wan_logo.png' alt='wan_logo' style='margin-bottom: -4px; height: 15px;'> | ~ 81 x 480 x 832 | [Apache-2.0](https://huggingface.co/Wan-AI/Wan2.1-T2V-1.3B/blob/main/LICENSE.txt) |
-| VACE-Wan2.1-14B | [To be released](https://github.com/Wan-Video) <img src='https://ali-vilab.github.io/VACE-Page/assets/logos/wan_logo.png' alt='wan_logo' style='margin-bottom: -4px; height: 15px;'> | ~ 81 x 720 x 1080 | [Apache-2.0](https://huggingface.co/Wan-AI/Wan2.1-T2V-14B/blob/main/LICENSE.txt) |
-| VACE-LTX-Video-0.9 | [Huggingface](https://huggingface.co/ali-vilab/VACE-LTX-Video-0.9) 🤗 [ModelScope](https://modelscope.cn/models/iic/VACE-LTX-Video-0.9) 🤖 | ~ 97 x 512 x 768 | [RAIL-M](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltx-video-2b-v0.9.license.txt) |
+| Models | Download Link | Video Size | License |
+|--------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------|-----------------------------------------------------------------------------------------------|
+| VACE-Wan2.1-1.3B-Preview | [Huggingface](https://huggingface.co/ali-vilab/VACE-Wan2.1-1.3B-Preview) 🤗 [ModelScope](https://modelscope.cn/models/iic/VACE-Wan2.1-1.3B-Preview) 🤖 | ~ 81 x 480 x 832 | [Apache-2.0](https://huggingface.co/Wan-AI/Wan2.1-T2V-1.3B/blob/main/LICENSE.txt) |
+| VACE-LTX-Video-0.9 | [Huggingface](https://huggingface.co/ali-vilab/VACE-LTX-Video-0.9) 🤗 [ModelScope](https://modelscope.cn/models/iic/VACE-LTX-Video-0.9) 🤖 | ~ 97 x 512 x 768 | [RAIL-M](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltx-video-2b-v0.9.license.txt) |
+| Wan2.1-VACE-1.3B | [Huggingface](https://huggingface.co/Wan-AI/Wan2.1-VACE-1.3B) 🤗 [ModelScope](https://www.modelscope.cn/models/Wan-AI/Wan2.1-VACE-1.3B) 🤖 | ~ 81 x 480 x 832 | [Apache-2.0](https://huggingface.co/Wan-AI/Wan2.1-T2V-1.3B/blob/main/LICENSE.txt) |
+| Wan2.1-VACE-14B | [Huggingface](https://huggingface.co/Wan-AI/Wan2.1-VACE-14B) 🤗 [ModelScope](https://www.modelscope.cn/models/Wan-AI/Wan2.1-VACE-14B) 🤖 | ~ 81 x 720 x 1280 | [Apache-2.0](https://huggingface.co/Wan-AI/Wan2.1-T2V-14B/blob/main/LICENSE.txt) |
 
 - The input supports any resolution, but to achieve optimal results, the video size should fall within a specific range.
 - All models inherit the license of the original model.
@@ -100,7 +100,7 @@ VACE
 
 ## 🚀 Usage
 In VACE, users can input **text prompt** and optional **video**, **mask**, and **image** for video generation or editing.
-Detailed instructions for using VACE can be found in the [User Guide](https://github.com/ali-vilab/VACE/blob/main/UserGuide.md).
+Detailed instructions for using VACE can be found in the [User Guide](./UserGuide.md).
 
 ### Inference CIL
 #### 1) End-to-End Running
@@ -130,7 +130,7 @@ python vace/vace_preproccess.py --task depth --video assets/videos/test.mp4
 # process video inpainting by providing bbox
 python vace/vace_preproccess.py --task inpainting --mode bbox --bbox 50,50,550,700 --video assets/videos/test.mp4
 ```
-The outputs will be saved to `./proccessed/` by default.
+The outputs will be saved to `./processed/` by default.
 
 > 💡**Note**:
 > Please refer to [run_vace_pipeline.sh](https://github.com/ali-vilab/VACE/blob/main/run_vace_pipeline.sh) preprocessing methods for different tasks.
@@ -141,13 +141,16 @@ You can also customize preprocessors by implementing at [`annotators`](https://g
 #### 3) Model inference
 Using the input data obtained from **Preprocessing**, the model inference process can be performed as follows:
 ```bash
-# For Wan2.1 single GPU inference
+# For Wan2.1 single GPU inference (1.3B-480P)
 python vace/vace_wan_inference.py --ckpt_dir <path-to-model> --src_video <path-to-src-video> --src_mask <path-to-src-mask> --src_ref_images <paths-to-src-ref-images> --prompt "xxx"
 
-# For Wan2.1 Multi GPU Acceleration inference
+# For Wan2.1 Multi GPU Acceleration inference (1.3B-480P)
 pip install "xfuser>=0.4.1"
 torchrun --nproc_per_node=8 vace/vace_wan_inference.py --dit_fsdp --t5_fsdp --ulysses_size 1 --ring_size 8 --ckpt_dir <path-to-model> --src_video <path-to-src-video> --src_mask <path-to-src-mask> --src_ref_images <paths-to-src-ref-images> --prompt "xxx"
 
+# For Wan2.1 Multi GPU Acceleration inference (14B-720P)
+torchrun --nproc_per_node=8 vace/vace_wan_inference.py --dit_fsdp --t5_fsdp --ulysses_size 8 --ring_size 1 --size 720p --model_name 'vace-14B' --ckpt_dir <path-to-model> --src_video <path-to-src-video> --src_mask <path-to-src-mask> --src_ref_images <paths-to-src-ref-images> --prompt "xxx"
+
 # For LTX inference, run
 python vace/vace_ltx_inference.py --ckpt_path <path-to-model> --text_encoder_path <path-to-model> --src_video <path-to-src-video> --src_mask <path-to-src-mask> --src_ref_images <paths-to-src-ref-images> --prompt "xxx"
 ```
@@ -157,12 +160,12 @@ The output video together with intermediate video, mask and images will be saved
 > (1) Please refer to [vace/vace_wan_inference.py](https://github.com/ali-vilab/VACE/blob/main/vace/vace_wan_inference.py) and [vace/vace_ltx_inference.py](https://github.com/ali-vilab/VACE/blob/main/vace/vace_ltx_inference.py) for the inference args.
 > (2) For LTX-Video and English language Wan2.1 users, you need prompt extension to unlock the full model performance.
 Please follow the [instruction of Wan2.1](https://github.com/Wan-Video/Wan2.1?tab=readme-ov-file#2-using-prompt-extension) and set `--use_prompt_extend` while running inference.
-
+> (3) When performing prompt extension in editing tasks, it's important to pay attention to the results of expanding plain text. Since the visual information being input is unknown, this may lead to the extended output not matching the video being edited, which can affect the final outcome.
 
 ### Inference Gradio
 For preprocessors, run
 ```bash
-python vace/gradios/preprocess_demo.py
+python vace/gradios/vace_preprocess_demo.py
 ```
 For model inference, run
 ```bash
@@ -175,15 +178,16 @@ python vace/gradios/vace_ltx_demo.py
 
 ## Acknowledgement
 
-We are grateful for the following awesome projects, including [Scepter](https://github.com/modelscope/scepter), [Wan](https://github.com/Wan-Video/Wan2.1), and [LTX-Video](https://github.com/Lightricks/LTX-Video).
+We are grateful for the following awesome projects, including [Scepter](https://github.com/modelscope/scepter), [Wan](https://github.com/Wan-Video/Wan2.1), and [LTX-Video](https://github.com/Lightricks/LTX-Video). Additionally, we extend our deepest gratitude to all community creators. It is their proactive exploration, experimentation, and boundless creativity that have brought immense inspiration to the project, fostering the emergence of even more refined workflows and stunning video generation content based on it. This includes, but is not limited to: [Kijai's Workflow](https://github.com/kijai/ComfyUI-WanVideoWrapper), native code support for [ComfyUI](https://github.com/comfyanonymous/ComfyUI) and [Diffusers](https://github.com/huggingface/diffusers), crucial model quantization support, a diverse ecosystem of LoRA adapters, and the ever-evolving innovative workflows from our community members.
 
 
 ## BibTeX
 
 ```bibtex
-@article{vace,
+@inproceedings{vace,
 title = {VACE: All-in-One Video Creation and Editing},
 author = {Jiang, Zeyinzi and Han, Zhen and Mao, Chaojie and Zhang, Jingfeng and Pan, Yulin and Liu, Yu},
-journal = {arXiv preprint arXiv:2503.07598},
+booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision},
+pages = {17191-17202},
 year = {2025}
-}
+}
data/sample000000_raw_video.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dc9410ed6ad9a32cddfcf51f35e2ef8539ff294767544f9109fb12ba0dfa2f7f
+size 1231563
data/sample000000_src_mask.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:41752d0bae5a9d9a802b8e85deab61d9fbbe59e37254914967dea047dcb8277b
+size 15259
data/sample000000_src_video.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d75825c20cd1b8fa41b1b005d541759373381830966d5d91755f86d191e89dbd
+size 15256
data/sample000001_raw_video.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b3599be7932ad12c7e7b1435bd1406bb1242a17b139d24e380bea54c6fe663d6
+size 6149441
data/sample000001_src_mask.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a2bcea7d058f773d8fe995aeb65d963ce31bad5e65b44585098f5cc273f1902a
+size 9344
data/sample000001_src_video.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:adccb70bbdca8d77cc18820d3954f9f2e2981b9202591732b1cfadd75ebf4d8c
+size 9342
data/sample000002_raw_video.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4e62fe6c0ff5750770b1e9569f7f7c637bfcacd15e4d28e699484a7d60b7457e
+size 1485085
data/sample000002_src_mask.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9d738e3dc8afb2dd0f1da94c254cd62921ef8286376e96fd1b1b2a33470a2e71
+size 6919
data/sample000002_src_video.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6f369fa47ac7ab423788115ae49ed693d775939233149de50a3ed545a24cd466
+size 6917
data/sample000003_raw_video.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:31d8c8163f77e9c0b2c6fe74c50d2df3b778104414a6c66edebb6203d1be0a2b
+size 413732
data/sample000003_src_mask.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:11bcf90e1cb52621a8d1582f1ffac28a193569bd437a07d6dfea6dbcdb1a1529
+size 13539
data/sample000003_src_video.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:68d909092cce71a88d914f299c2c53a07329dd30241855be5af492a9eeb38146
+size 13538
data/sample000004_raw_video.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3c23530886d6fec5e640a87dbb34c02e2a8e5ee6949d18c1c98acbbab872f75d
+size 264348
data/sample000004_src_mask.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a4693b4a8f850b0080d877b7abe2a215e2f93cae5c0f381531e1bbc5f74c810b
+size 6862
data/sample000004_src_video.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9eb47d2c17d187189f5f6f4f1a3e363291993978e31547d48d15523b9d58aca2
+size 6860
data/sample000005_raw_video.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bfdbc6dccfb25b870516c524b2ed0ffd8e9b9d1f12099073af6778679addcf4a
+size 233961
data/sample000005_src_mask.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:54f538bdfe1318e1cf1fb768bde446db872ebfe4b7c2e2918638f989fc639f6d
+size 6973
data/sample000005_src_video.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6c189d17a3644579412a6b2ca3e8ed883adb2f02fcff50391ecc68fe942831f7
+size 6972
data/sample000006_raw_video.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9b1ce4ead4444198e9703a650b2a770b402cedd97d0ff2fdc4404d3673d84cff
+size 5607328
data/sample000006_src_mask.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fcf509e182c822ab13d5bb10441daa294d699a97c2a4fe6ba02acb5aa9c4b6b3
+size 8102
data/sample000006_src_video.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e5449ffecb3c25752c2a9d458e47853f8dfd2743caac3865db15dce8fedb6413
+size 8100
data/sample000007_raw_video.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:17622a2d682b8d66ad7a1b7fe2de18c27e4981dcb1ed92e47826d6c74efd3909
+size 339778
data/sample000007_src_mask.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:71fa2dc6ac8934193ad33dc883a50e60e682334c76d528e04ac954b4c8fa7a07
+size 7441
data/sample000007_src_video.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f863d9c1b6df432d291f03641fb305f74ddd8d62029a2b4ab8531a3bf869bf68
+size 7439
data/sample000008_raw_video.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3e2ed8352989eed418b1e3575e6a6722eeed332112bbeacab7a97f4a3aa85a5b
+size 979799
data/sample000008_src_mask.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4521eb525248e03c21e21121040e560f750c53249e2502b6cc0c8fd59e769d01
+size 14061
data/sample000008_src_video.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2426bc1d17f757166a988432669ea1f96bba6bdad2b329934d3205ad2d56e451
+size 14060
data/sample000009_raw_video.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9fa28ffaed166f0cc95ea67e65119c035cff338aa1489d6f4cc29d85ca580aa2
+size 3828991
data/sample000009_src_mask.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3493d15d12555ebe9d9391486439ae18337231de74314efbefd4c4a1e884ff59
+size 15159
data/sample000009_src_video.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:38f13700351c865c904aba11c15f9f102a792b94bbca9b6cd38af1d8942f23f6
+size 15155
data/sample000010_raw_video.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3a7607bb3bec372b4e54a9c05a4e94455604233ac6807763e73c4dc19d41810a
+size 882267
data/sample000010_src_mask.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1ed67a436e073d210f79dbf820746b0742e35924895669bddd9cdec8a1f0b9fa
+size 372349
data/sample000010_src_video.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:df2e0d8c78cb09d623421001d888eaaa0f859ac1c37bf76efa0ab724756d46cf
+size 458678
data/sample000011_raw_video.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8a1e0591ad62b4cc161d05d71d46ac9d50566d5f448396826e49eec08dc6e5e8
+size 1029640
data/sample000011_src_mask.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3cba7ab123a3cdad680acc670d2392108e7f7d834dc798de34f9dc177451022b
+size 14995
data/sample000011_src_video.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b501d29a604a64a5e7b319fd895966ca9c14e3a59900ef181be949c7cb33f4f9
+size 286155
data/sample000012_raw_video.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6ae8b433859f44539a170116f3704e19a53fdab7385732e395cf7f297beaa38e
+size 331822
data/sample000012_src_mask.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:82e2cdca01a31d816c6b3edb15b1a7798ca776355f08bf01ef08079bfc467ffb
+size 18620
data/sample000012_src_video.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5927008efa92be0c179ae0990d366c49f5a3ce8177018b23f73bdb1ca59ce105
+size 283535
data/sample000013_raw_video.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a5347e29bb5ac880e5672dce30a3be780cef97679ed81153d7bba80582478e86
+size 779411
data/sample000013_src_mask.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9a9301ae60f1c66dc9a512c423a8b2b8b4a57b72ce6b3badc53deaf01f92c719
+size 59481
data/sample000013_src_video.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:82a5db6a8e659300b9bd50eb684c48d48366a178e00a5f1117f2ad29090214ba
+size 512151
data/sample000014_raw_video.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5bc4ff6a441b69e385ce179f859184159560b918a0b8a2e51d2a492d6c2a0a55
+size 643930
data/sample000014_src_mask.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9c4a5e19ff155448eefd061ea1882bbf7e88b1f8273ac8161dab8751ce8ed1d4
+size 275995
data/sample000014_src_video.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:67f2575f7f6cab570c2c7be9d2b649a990ea62f817892a3b07ad9a97cacfe649
+size 316931
data/sample000015_raw_video.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b29d37ad7c003959a49004e8e3241e67b56610eb1a797a2b6c40b728caafc5f9
+size 1286877
data/sample000015_src_mask.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:25cc720adab78447928bf7899890ee53841f9895c6fdb0c536627841f64cc3e2
+size 102221
data/sample000015_src_video.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9a2a432ccebfbbd8ac2f5525ce835f13951adba076a700f9c722d6527a985d19
+size 1071147
data/sample000016_raw_video.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:18e48bfbd2c504e1e18b72b2e1f3f97bea3e1077f38b906d8e97225cee79aebd
+size 1359723
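
Each added `.mp4` above is a three-line Git LFS pointer (version URL, `sha256` object id, byte size), not the video itself. A minimal sketch of reading those fields, using the pointer from `data/sample000000_raw_video.mp4` as the sample input:

```python
# Sketch: parse the "version / oid / size" Git LFS pointer layout shown in
# the diffs above. The sample values are copied verbatim from
# data/sample000000_raw_video.mp4; the parser itself is illustrative.

POINTER_TEXT = """\
version https://git-lfs.github.com/spec/v1
oid sha256:dc9410ed6ad9a32cddfcf51f35e2ef8539ff294767544f9109fb12ba0dfa2f7f
size 1231563
"""

def parse_lfs_pointer(text: str) -> dict:
    """Split each 'key value' pointer line into a key/value dict."""
    fields = {}
    for line in text.splitlines():
        if line.strip():
            key, _, value = line.partition(" ")
            fields[key] = value
    return fields

pointer = parse_lfs_pointer(POINTER_TEXT)
print(pointer["size"])  # → 1231563 (byte size of the tracked mp4)
```

The `size` field gives the real file size on LFS storage, which is why the diff itself only shows three lines per video.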