|
---
language:
- en
license: apache-2.0
---
|
# 360°-Motion Dataset |
|
|
|
[Project page](http://fuxiao0719.github.io/projects/3dtrajmaster) | [Paper](https://drive.google.com/file/d/111Z5CMJZupkmg-xWpV4Tl4Nb7SRFcoWx/view) | [Code](https://github.com/kwaiVGI/3DTrajMaster) |
|
|
|
### Acknowledgments |
|
We thank Jinwen Cao, Yisong Guo, Haowen Ji, Jichao Wang, and Yi Wang from Kuaishou Technology for their help in constructing our 360°-Motion Dataset. |
|
|
|
![image/png](imgs/dataset.png) |
|
|
|
### News |
|
- [2024-12] We release the V1 dataset (72,000 videos covering 50 entities, 6 UE scenes, and 121 trajectory templates).
|
|
|
### Data structure |
|
|
|
```
├── 360Motion-Dataset                     Video Number    Cam-Obj Distance (m)
    ├── 480_720/384_672
        ├── Desert (desert)               18,000          [3.06, 13.39]
            ├── location_data.json
        ├── HDRI
            ├── loc1 (snowy street)       3,600           [3.43, 13.02]
            ├── loc2 (park)               3,600           [4.16, 12.22]
            ├── loc3 (indoor open space)  3,600           [3.62, 12.79]
            ├── loc11 (gymnastics room)   3,600           [4.06, 12.32]
            ├── loc13 (autumn forest)     3,600           [4.49, 11.91]
            ├── location_data.json
        ├── RefPic
        ├── CharacterInfo.json
        ├── Hemi12_transforms.json
```
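To sanity-check a local download against this layout, a minimal sketch like the one below can walk one resolution split and count the clips per scene. The root path and the `.mp4` suffix are assumptions here; adjust them to match your copy.

```python
# Minimal sketch (assumptions: root path and .mp4 clip suffix) that walks one
# resolution split and counts rendered clips per scene folder.
from pathlib import Path

ROOT = Path("360Motion-Dataset/480_720")  # or .../384_672

for scene in sorted(p for p in ROOT.iterdir() if p.is_dir()):
    n_clips = sum(1 for _ in scene.rglob("*.mp4"))
    print(f"{scene.name:<30} {n_clips:>6} clips")
```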
|
|
|
**(1) Released Dataset Information** |
|
|
|
| Argument | Description | Argument | Description |
|-------------------------|-------------|-------------------------|-------------|
| **Video Resolution** | (1) 480×720 (2) 384×672 | **Frames/Duration/FPS** | 99/3.3s/30 |
| **UE Scenes** | 6 (1 desert + 5 HDRIs) | **Video Samples** | (1) 36,000 (2) 36,000 |
| **Camera Intrinsics (fx, fy)** | (1) 1060.606 (2) 989.899 | **Sensor Width/Height (mm)** | (1) 23.76/15.84 (2) 23.76/13.365 |
| **Hemi12_transforms.json** | 12 surrounding cameras | **CharacterInfo.json** | entity prompts |
| **RefPic** | 50 animals | **1/2/3 Trajectory Templates** | 36/60/35 (121 in total) |
| **{D/N}_{locX}** | {Day/Night}_{LocationX} | **{C}_{XX}_{35mm}** | {Close-Up Shot}_{Cam. Index (1-12)}_{Focal Length} |
|
|
|
**Note that** the 384×672 resolution refers to the resolution used by our internal video diffusion model. The videos are actually rendered at 378×672 (9:16 aspect ratio), with a 3-pixel black border added to both the top and bottom.
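If you need the camera parameters downstream, the focal lengths above can be assembled into a standard pinhole intrinsics matrix. The sketch below is a minimal example that assumes a principal point at the image center (not stated explicitly in the table).

```python
# Minimal sketch: build pinhole intrinsics K from the focal lengths in the table.
# Assumption: principal point at the image center.
import numpy as np

def make_K(f: float, width: int, height: int) -> np.ndarray:
    return np.array([
        [f, 0.0, width / 2.0],
        [0.0, f, height / 2.0],
        [0.0, 0.0, 1.0],
    ])

K_480x720 = make_K(1060.606, width=720, height=480)
K_384x672 = make_K(989.899, width=672, height=384)

# If you crop the 3-pixel top/bottom borders to recover the rendered 378x672
# frames, shift the principal point accordingly (cy -= 3).
print(K_480x720)
print(K_384x672)
```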
|
|
|
**(2) Differences from the Dataset Used to Train Our Internal Video Diffusion Model**
|
|
|
The release of the full dataset, which includes more entities and UE scenes, is 1) still under our internal license check and 2) awaiting the paper decision.
|
|
|
| Argument | Released Dataset | Our Internal Dataset |
|-------------------------|-------------|-------------------------|
| **Video Resolution** | (1) 480×720 (2) 384×672 | 384×672 |
| **Entities** | 50 (all animals) | 70 (20 humans + 50 animals) |
| **Video Samples** | (1) 36,000 (2) 36,000 | 54,000 |
| **Scenes** | 6 | 9 (+ city, forest, Asian town) |
| **Trajectory Templates** | 121 | 96 |
|
|
|
**(3) Load Dataset Sample** |
|
|
|
1. Change the root path to `dataset`. We provide a script to load a dataset sample (video, entity, and pose sequence) as follows; it will write the sampled video for visualization to the same folder. A standalone sanity check for a single clip is sketched after the command.
|
|
|
```bash |
|
python load_dataset.py |
|
``` |
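As a quick standalone check (independent of `load_dataset.py`), you can decode one clip with OpenCV and confirm the frame count, resolution, and FPS listed above; the clip path below is a placeholder.

```python
# Standalone sanity check (not the provided load_dataset.py): decode one clip
# and confirm 99 frames at 30 fps in the listed resolution.
import cv2

cap = cv2.VideoCapture("path/to/any_clip.mp4")  # placeholder path
n_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = cap.get(cv2.CAP_PROP_FPS)
cap.release()

print(f"{n_frames} frames, {width}x{height}, {fps:.0f} fps")  # expect 99 frames, 30 fps
```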
|
|
|
2. Visualize the 6DoF pose sequence via Open3D as follows. |
|
|
|
```bash |
|
python vis_trajecotry.py |
|
``` |
|
After running the visualization script, you will get an interactive window like the one below. A minimal standalone Open3D sketch follows the screenshot.
|
|
|
<img src="imgs/vis_objstraj.png" width="350" /> |
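For reference, the sketch below illustrates how a 6DoF pose sequence can be drawn as a trail of coordinate frames with Open3D. It is not the provided `vis_trajecotry.py`; the circular trajectory is a synthetic placeholder for a real pose sequence from the dataset.

```python
# Minimal Open3D sketch: render a 6DoF pose sequence as coordinate frames.
# The circular trajectory below is a synthetic placeholder.
import numpy as np
import open3d as o3d

poses = []
for t in np.linspace(0.0, 2.0 * np.pi, 30):
    T = np.eye(4)                                        # 4x4 world-from-object pose
    T[:3, 3] = [3.0 * np.cos(t), 3.0 * np.sin(t), 0.0]   # translation along a circle
    poses.append(T)

frames = [o3d.geometry.TriangleMesh.create_coordinate_frame(size=0.3) for _ in poses]
for frame, T in zip(frames, poses):
    frame.transform(T)

o3d.visualization.draw_geometries(frames)  # opens an interactive viewer window
```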
|
|
|
## Citation |
|
|
|
```bibtex
@article{fu20243dtrajmaster,
  author  = {Fu, Xiao and Liu, Xian and Wang, Xintao and Peng, Sida and Xia, Menghan and Shi, Xiaoyu and Yuan, Ziyang and Wan, Pengfei and Zhang, Di and Lin, Dahua},
  title   = {3DTrajMaster: Mastering 3D Trajectory for Multi-Entity Motion in Video Generation},
  journal = {arXiv preprint arXiv:2412.07759},
  year    = {2024}
}
```
|
|
|
## Contact |
|
|
|
Xiao Fu: lemonaddie0909@gmail.com |