Dataset Card for LiveScene
Dataset Description
The dataset consists of two parts: the InterReal dataset, which was captured using the Polycam app on an iPhone 15 Pro, and the OmniSim dataset created with the OmniGibson simulator. In total, the dataset provides 28 interactive subsets, containing 2 million samples across various modalities, including RGB, depth, segmentation, camera trajectories, interaction variables, and object captions. This comprehensive dataset supports a range of tasks involving real-world and simulated environments.
Dataset Sources
- Paper: LiveScene: Language Embedding Interactive Radiance Fields for Physical Scene Rendering and Control (arXiv:2406.16038)
- Repository: https://huggingface.co/datasets/IPEC-COMMUNITY/LiveScene
Uses
Direct Use
To download the entire dataset, follow these steps:
```bash
git lfs install
git clone https://huggingface.co/datasets/IPEC-COMMUNITY/LiveScene

# Merge the parts (if necessary)
cat {scene_name}_part_* > {scene_name}.tar.gz
tar -xzf {scene_name}.tar.gz
```
If you only want to download a specific subset, use the following code:
```python
from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="IPEC-COMMUNITY/LiveScene",
    filename="OmniSim/{scene_name}.tar.gz",
    repo_type="dataset",
    local_dir=".",
)
```
After downloading, you can extract the subset using:
```bash
tar -xzf {scene_name}.tar.gz
```
Dataset Structure
```text
.
|-- InterReal
|   `-- {scene_name}.tar.gz
|       |-- depth
|       |   `-- xxx.npy
|       |-- images
|       |   `-- xxx.jpg
|       |-- images_2
|       |-- images_4
|       |-- images_8
|       |-- masks
|       |   `-- xxx.npy
|       |-- key_frame_value.yaml
|       |-- mapping.yaml
|       `-- transforms.json
`-- OmniSim
    `-- {scene_name}.tar.gz
        |-- depth
        |   `-- xxx.npy
        |-- images
        |   `-- xxx.png
        |-- mask
        |   `-- xxx.npy
        |-- key_frame_value.yaml
        |-- mapping.yaml
        `-- transforms.json
```
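After extracting a subset, a quick way to sanity-check the layout above is to load a single frame. The snippet below is a minimal sketch: the subset directory and frame stem are placeholders, and the `frames` key assumes the Nerfstudio-style `transforms.json` produced by the processing pipeline described below; adjust if the schema differs.

```python
import json

import numpy as np
from PIL import Image

subset_dir = "path/to/extracted/subset"  # placeholder: an extracted {scene_name} directory
stem = "00000"                           # placeholder: any frame index present in the subset

# RGB frame (InterReal stores .jpg; OmniSim subsets store .png instead)
image = np.asarray(Image.open(f"{subset_dir}/images/{stem}.jpg"))

# Per-frame depth and interactive-object masks are stored as NumPy arrays
depth = np.load(f"{subset_dir}/depth/{stem}.npy")
masks = np.load(f"{subset_dir}/masks/{stem}.npy")  # directory is "mask" in OmniSim subsets

# Camera poses live in transforms.json; the "frames" key follows the
# Nerfstudio convention
with open(f"{subset_dir}/transforms.json") as f:
    transforms = json.load(f)

print(image.shape, depth.shape, masks.shape, len(transforms.get("frames", [])))
```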
Dataset Creation
Curation Rationale
To our knowledge, existing view-synthesis datasets for interactive scene rendering are limited to a few interactive objects, because manually annotating object masks and states is so labor-intensive that scaling up to real scenarios with multi-object interactions is impractical. To bridge this gap, we construct two scene-level, high-quality annotated datasets, OmniSim and InterReal, to advance research on reconstructing and understanding interactive scenes.
Data Collection and Processing
Scene Assets and Generation Pipeline for OmniSim
We generate the synthetic dataset using the OmniGibson simulator. The dataset consists of 20 interactive scenes from 7 scene models: #rs, #ihlen, #beechwood, #merom, #pomaria, #wainscott, and #benevolence. The scenes feature various interactive objects, including cabinets, refrigerators, doors, drawers, and more, each with different hinge joints.
We configure the simulator camera with a focal length of 8, an aperture of 20, and a resolution of 1024 × 1024. By varying the rotation vector of each joint of the articulated objects, we can observe the objects in different motion states. From multiple camera trajectories and viewpoints, we generated 20 high-definition subsets, each consisting of RGB images, depth, camera trajectories, interactive object masks, and each object's state quantity relative to its "closed" state at each time step.
The data is obtained through the following steps:
- The scene model is loaded, and the respective objects are selected, with motion trajectories set for each joint.
- Keyframes are set for camera movement in the scene, and smooth trajectories are obtained through interpolation (see the sketch after this list).
- The simulator is then initiated, and the information captured by the camera at each moment is recorded.
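The interpolation step can be illustrated with a short sketch. This is illustrative only, not the actual OmniGibson scripting: it linearly interpolates hypothetical camera keyframe positions; a spline (e.g., scipy.interpolate.CubicSpline) would give an even smoother path.

```python
import numpy as np

# Hypothetical camera keyframes: times and xyz positions set by hand
key_times = np.array([0.0, 1.0, 2.0, 3.0])
key_positions = np.array([
    [0.0, 0.0, 1.5],
    [1.0, 0.5, 1.5],
    [1.5, 1.5, 1.6],
    [1.0, 2.5, 1.7],
])

# Densely resample the trajectory so the camera moves smoothly between keyframes
sample_times = np.linspace(key_times[0], key_times[-1], 120)
trajectory = np.stack(
    [np.interp(sample_times, key_times, key_positions[:, axis]) for axis in range(3)],
    axis=1,
)  # shape (120, 3): one camera position per recorded simulator step

print(trajectory.shape)
```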
Scene Assets and Generation Pipeline for InterReal
InterReal is primarily captured using the Polycam app on an Apple iPhone 15 Pro. We selected 8 everyday scenes and placed various interactive objects in each, including transformers, laptops, microwaves, and more. We recorded 8 videos at 5 FPS, capturing 700 to 1000 frames per video.
The dataset was processed via the following steps:
- manual object movement and keyframe capture
- OBJ file export and pose optimization using Polycam
- conversion to a dataset containing RGB images and transformation matrices using Nerfstudio
- mask generation for each object in each scene using SAM with corresponding prompts, plus state quantity labeling for certain keyframes (a SAM prompting sketch follows this list).
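For reference, point-prompted mask generation with the public segment_anything package typically looks like the sketch below. The checkpoint path, image path, and prompt coordinates are placeholders; this mirrors SAM's released API rather than the exact pipeline used to annotate the dataset.

```python
import numpy as np
from PIL import Image
from segment_anything import SamPredictor, sam_model_registry

# Load a SAM checkpoint (path is a placeholder)
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# Prompt SAM with a single foreground point on the interactive object
image = np.asarray(Image.open("images/00000.jpg").convert("RGB"))
predictor.set_image(image)
masks, scores, _ = predictor.predict(
    point_coords=np.array([[512, 384]]),  # hypothetical (x, y) pixel on the object
    point_labels=np.array([1]),           # 1 = foreground
    multimask_output=False,
)

# Save the boolean H x W mask alongside the frame
np.save("masks/00000.npy", masks[0])
```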
Citation
If you find our work useful, please consider citing us!
```bibtex
@article{livescene2024,
  title   = {LiveScene: Language Embedding Interactive Radiance Fields for Physical Scene Rendering and Control},
  author  = {Delin Qu and Qizhi Chen and Pingrui Zhang and Xianqiang Gao and Bin Zhao and Zhigang Wang and Dong Wang and Xuelong Li},
  year    = {2024},
  journal = {arXiv preprint arXiv:2406.16038}
}
```