
Dataset Card for EN-SLAM (Implicit Event-RGBD Neural SLAM, CVPR24)

Dataset Description

This repository contains the dataset for the paper Implicit Event-RGBD Neural SLAM, the first event-RGBD implicit neural SLAM framework, which efficiently leverages event streams and RGB-D data to overcome the challenges of scenes with extreme motion blur and lighting variation. DEV-Indoors is generated with Blender [6] and an event simulator [14]; it covers normal, motion-blur, and dark scenes, providing 9 subsets with RGB images, depth maps, event streams, meshes, and trajectories. DEV-Reals is captured from real scenes and provides 8 challenging subsets under motion blur and lighting variation.

Dataset Sources

Update

  • Release the DEV-Indoors and DEV-Reals datasets.
  • Add dataset usage instructions.

Usage

  • Download and Extract (setting export HF_ENDPOINT=https://hf-mirror.com may help if the main endpoint is blocked for you)
huggingface-cli download --resume-download --local-dir-use-symlinks False delinqu/EN-SLAM-Dataset --local-dir EN-SLAM-Dataset 

# Alternatively, you can clone the repo with git
git lfs install
git clone https://huggingface.co/datasets/delinqu/EN-SLAM-Dataset

If you only want to download a specific subset, use the following code:

from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="delinqu/EN-SLAM-Dataset",
    filename="DEV-Indoors_config.tar.gz",
    repo_type="dataset",
    local_dir=".",
)
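
If you need several files at once (e.g. a whole subset), snapshot_download with allow_patterns is another option. This is only a sketch; the pattern below is an assumption and should be adapted to the actual archive names in the repo:

from huggingface_hub import snapshot_download

# Download only the files whose paths match the pattern (example pattern)
snapshot_download(
    repo_id="delinqu/EN-SLAM-Dataset",
    repo_type="dataset",
    allow_patterns="DEV-Reals*",
    local_dir="EN-SLAM-Dataset",
)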

After downloading, run the following script from the project root to extract the tar.gz archives. The Python script simply unpacks all the tar.gz files; feel free to customise it:

python scripts/extract_dataset.py
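
If you prefer not to use the script, a minimal equivalent looks like this (assuming all archives sit under the current directory; scripts/extract_dataset.py remains the reference):

import glob
import os
import tarfile

# Extract every tar.gz archive next to where it was downloaded
for path in glob.glob("**/*.tar.gz", recursive=True):
    with tarfile.open(path, "r:gz") as tar:
        tar.extractall(path=os.path.dirname(path) or ".")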

The extracted dataset follows the structure shown in the Dataset Format section below.

  • Use a Dataloader

Please refer to datasets/dataset.py for the dataloaders of DEV-Indoors and DEV-Reals.
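
As a rough illustration of what such a dataloader does, the sketch below reads one RGB-D frame and its pose from a DEV-Indoors sequence. The file layout is inferred from the Dataset Format section; datasets/dataset.py is the reference implementation:

import glob
import os

import cv2
import numpy as np

def load_frame(seq_dir, idx):
    """Load one RGB-D frame and its pose from a DEV-Indoors sequence."""
    rgb_files = sorted(glob.glob(os.path.join(seq_dir, "rgb", "*")))
    depth_files = sorted(glob.glob(os.path.join(seq_dir, "depth", "*")))
    pose_files = sorted(glob.glob(os.path.join(seq_dir, "pose", "*")))

    rgb = cv2.cvtColor(cv2.imread(rgb_files[idx]), cv2.COLOR_BGR2RGB)
    depth = cv2.imread(depth_files[idx], cv2.IMREAD_UNCHANGED)  # raw depth map
    pose = np.loadtxt(pose_files[idx])  # assumed 4x4 camera-to-world matrix
    return rgb, depth, pose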

  • Evaluation

To construct the evaluation subsets, we use frustum + occlusion + virtual-camera culling, which introduces extra virtual views to cover the occluded parts inside the region of interest, as in CoSLAM. The evaluation data are generated by randomly sampling 2000 poses and rendering the corresponding depths in Blender for each scene; we further add extra virtual views manually to cover all scenes. This process helps to evaluate the view-synthesis and hole-filling capabilities of an algorithm. Please follow neural_slam_eval with our ground-truth point clouds and images.

Dataset Format

DEV-Indoors Dataset

  • data structure
β”œβ”€β”€ groundtruth # evaluation metadata: pose, rgb, depth, mesh
β”‚   β”œβ”€β”€ apartment
β”‚   β”œβ”€β”€ room
β”‚   └── workshop
β”œβ”€β”€ seq001_room_norm # normal sequence: event, rgb, depth, pose, camera_para
β”‚   β”œβ”€β”€ camera_para.txt
β”‚   β”œβ”€β”€ depth
β”‚   β”œβ”€β”€ depth_mm
β”‚   β”œβ”€β”€ event.zip
β”‚   β”œβ”€β”€ pose
β”‚   β”œβ”€β”€ rgb
β”‚   β”œβ”€β”€ timestamps.txt
β”‚   └── seq001_room_norm.yaml
β”œβ”€β”€ seq002_room_blur # blur sequence: event, rgb, depth, pose, camera_para
β”‚   β”œβ”€β”€ depth
β”‚   β”œβ”€β”€ depth_mm
β”‚   β”œβ”€β”€ event.zip
β”‚   β”œβ”€β”€ pose
β”‚   β”œβ”€β”€ rgb
β”‚   β”œβ”€β”€ timestamps.txt
β”‚   └── seq002_room_blur.yaml
β”œβ”€β”€ seq003_room_dark # dark sequence: event, rgb, depth, pose, camera_para
β”‚   β”œβ”€β”€ depth
β”‚   β”œβ”€β”€ depth_mm
β”‚   β”œβ”€β”€ event.zip
β”‚   β”œβ”€β”€ pose
β”‚   β”œβ”€β”€ rgb
β”‚   β”œβ”€β”€ timestamps.txt
β”‚   └── seq003_room_dark.yaml
...
└── seq009_workshop_dark
    β”œβ”€β”€ depth
    β”œβ”€β”€ depth_mm
    β”œβ”€β”€ event.zip
    β”œβ”€β”€ pose
    β”œβ”€β”€ rgb
    β”œβ”€β”€ timestamps.txt
    └── seq009_workshop_dark.yaml
  • model: 3D models of the room, apartment, and workshop scenes

model
β”œβ”€β”€ apartment
β”‚   β”œβ”€β”€ apartment.blend
β”‚   β”œβ”€β”€ hdri
β”‚   β”œβ”€β”€ room.blend
β”‚   β”œβ”€β”€ supp
β”‚   └── Textures
└── workshop
    β”œβ”€β”€ hdri
    β”œβ”€β”€ Textures
    └── workshop.blend
  • scripts: scripts for data generation and visualization.
scripts
β”œβ”€β”€ camera_intrinsic.py # blender camera intrinsic generation tool.
β”œβ”€β”€ camera_pose.py # blender camera pose generation tool.
β”œβ”€β”€ npzs_to_frame.py # convert npz to frame.
β”œβ”€β”€ read_ev.py # read event data.
└── viz_ev_frame.py # visualize event and frame.
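
read_ev.py and npzs_to_frame.py are the reference tools for handling the event data. As a hedged sketch of the underlying idea, accumulating an event stream into a 2D polarity frame could look like the snippet below; the (x, y, t, polarity) column layout and the DAVIS346 resolution are assumptions, so check read_ev.py for the exact format:

import numpy as np

def events_to_frame(events, height=260, width=346):
    # `events` is assumed to be an (N, 4) array of (x, y, t, polarity) rows,
    # e.g. loaded from an .npz file extracted out of event.zip.
    frame = np.zeros((height, width), dtype=np.float32)
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    pol = np.where(events[:, 3] > 0, 1.0, -1.0)  # map polarity to +/-1
    np.add.at(frame, (y, x), pol)  # accumulate events, handling repeated pixels
    return frame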

DEV-Reals Dataset

DEV-Reals
β”œβ”€β”€ devreals.yaml # dataset metadata: camera parameters, cam2davis transformation matrix
β”‚
β”œβ”€β”€ enslamdata1 # sequence: davis346, pose, rgbd
β”‚   β”œβ”€β”€ davis346
β”‚   β”œβ”€β”€ pose
β”‚   └── rgbd
β”œβ”€β”€ enslamdata1.bag
β”œβ”€β”€ enslamdata2
β”‚   β”œβ”€β”€ davis346
β”‚   β”œβ”€β”€ pose
β”‚   └── rgbd
β”œβ”€β”€ enslamdata2.bag
β”œβ”€β”€ enslamdata3
β”‚   β”œβ”€β”€ davis346
β”‚   β”œβ”€β”€ pose
β”‚   └── rgbd
β”œβ”€β”€ enslamdata3.bag
...
β”œβ”€β”€ enslamdata8
β”‚   β”œβ”€β”€ davis346
β”‚   β”œβ”€β”€ pose
β”‚   └── rgbd
└── enslamdata8.bag
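
devreals.yaml stores the camera parameters and the cam2davis transformation matrix. A minimal way to inspect it (the key names in the comment are hypothetical; check the file itself):

import numpy as np
import yaml  # pip install pyyaml

with open("DEV-Reals/devreals.yaml") as f:
    meta = yaml.safe_load(f)

print(meta.keys())  # list the available fields first
# Hypothetical access, assuming a 4x4 cam2davis matrix stored as a flat list:
# T_cam2davis = np.asarray(meta["cam2davis"]).reshape(4, 4)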

Citation

If you use this work or find it helpful, please consider citing:

@inproceedings{qu2023implicit,
    title={Implicit Event-RGBD Neural SLAM},
    author={Qu, Delin and Yan, Chi and Wang, Dong and Yin, Jie and Chen, Qizhi and Zhang, Yiting and Xu, Dan and Zhao, Bin and Li, Xuelong},
    booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
    year={2024}
}