---
license: apache-2.0
task_categories:
- text-to-3d
- image-to-3d
language:
- en
tags:
- 4d
- 3d
- text-to-4d
- image-to-4d
size_categories:
- 1M<n<10M
---
# Diffusion4D: Fast Spatial-temporal Consistent 4D Generation via Video Diffusion Models

[Project Page] | [Code]
## News
- 2024.5.27: Released metadata for objects!
## Overview
We collect a large-scale, high-quality dynamic 3D (4D) dataset sourced from the vast 3D data corpus of Objaverse-1.0 and Objaverse-XL, filtered with a series of empirical rules; see our paper for details. In this repository, we release the selected 4D assets, including:
- IDs of the selected high-quality 4D objects.
- A Blender rendering script with optional settings for rendering your own data; a usage sketch follows this list.
- (To be uploaded) 4D images rendered by our team, to save your GPU time.
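As a rough illustration of headless rendering with Blender, the sketch below renders every animation frame of a GLB asset to PNGs. It is an assumption-laden placeholder, not the released script: the entry point, flags, and output layout are hypothetical, so check the actual rendering script in the code release for its real interface.

```python
# Hypothetical sketch of headless Blender rendering for a 4D (animated) asset.
# Run as: blender --background --python render_sketch.py -- asset.glb out_dir
# The argument layout and output naming are assumptions, not the dataset's script.
import sys
import bpy

# Arguments after the "--" separator are passed through to this script.
argv = sys.argv[sys.argv.index("--") + 1:]
glb_path, out_dir = argv[0], argv[1]

# Import the animated asset (Objaverse assets are commonly distributed as glTF/GLB).
bpy.ops.import_scene.gltf(filepath=glb_path)

scene = bpy.context.scene
scene.render.image_settings.file_format = "PNG"

# Render each frame of the animation to a separate still image.
for frame in range(scene.frame_start, scene.frame_end + 1):
    scene.frame_set(frame)
    scene.render.filepath = f"{out_dir}/frame_{frame:04d}.png"
    bpy.ops.render.render(write_still=True)
```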
## 4D Dataset ID/Metadata
We collect 365k dynamic 3D assets from Objaverse-1.0 (42k) and Objaverse-XL (323k), and curate a high-quality subset to train our models. For Objaverse-1.0, the 11k curated IDs are provided in `rendering/src/ObjV1_curated.txt`; the uncurated 42k IDs of all animated objects from Objaverse-1.0 are in `rendering/src/ObjV1_all_animated.txt`.
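A minimal sketch for loading these ID lists, assuming one object ID per line (the file layout is an assumption; inspect the files to confirm):

```python
# Load object IDs from a text file, assuming one ID per line (layout assumed).
def load_ids(path: str) -> list[str]:
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

curated = load_ids("rendering/src/ObjV1_curated.txt")
all_animated = load_ids("rendering/src/ObjV1_all_animated.txt")
print(len(curated), len(all_animated))  # expected roughly 11k and 42k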
Metadata of the 323k animated objects from Objaverse-XL can be found in `meta_xl_animation_tot.csv`. We also release metadata of all successfully rendered objects from Objaverse-XL's GitHub subset in `meta_xl_tot.csv`.
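A quick way to inspect the metadata files (assuming a standard CSV with a header row; the column names are not documented here, so print them rather than guessing):

```python
import pandas as pd

# Load the Objaverse-XL animated-object metadata (standard CSV layout assumed).
meta = pd.read_csv("meta_xl_animation_tot.csv")
print(meta.shape)             # expected on the order of 323k rows
print(meta.columns.tolist())  # inspect available fields rather than assuming them
print(meta.head())
```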
For text-to-4D generation, captions are obtained from Cap3D. More details on the dataset and the curation scripts are coming soon!
## Citation
If you find this repository/work/dataset helpful in your research, please consider citing the paper and starring the repo ⭐.
```bibtex
@article{liang2024diffusion4d,
  title={Diffusion4D: Fast Spatial-temporal Consistent 4D Generation via Video Diffusion Models},
  author={},
  journal={arXiv preprint arXiv:},
  year={2024}
}
```