
Robust e-NeRF Synthetic Event Dataset

Project Page · arXiv · Code · Simulator

(Previews: Easy · Medium · Hard)

This repository contains the synthetic event dataset used in Robust e-NeRF to study the collective effect of camera speed profile, contrast threshold variation and refractory period on the quality of NeRF reconstruction from a moving event camera. The dataset is simulated using an improved version of ESIM with three camera configurations of increasing difficulty (i.e. easy, medium and hard) on seven Realistic Synthetic 360° scenes (adopted in the synthetic experiments of NeRF), resulting in a total of 21 sequence recordings. Please refer to the Robust e-NeRF paper for more details.
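To make the two simulated non-idealities concrete, the sketch below illustrates how an idealized event camera pixel generates events: an event fires whenever the log intensity deviates from the value at the last event by at least the contrast threshold, and the pixel is then blind for the refractory period. This is a minimal single-pixel illustration, not ESIM's actual implementation; the function name, thresholds and timings are hypothetical example values, not the dataset's configuration.

```python
import numpy as np

def generate_events(t, log_intensity, c_pos=0.25, c_neg=0.25, refractory=1e-3):
    """Illustrative single-pixel event generation (hypothetical helper).

    An event of polarity +1/-1 fires when the log-intensity change since the
    last event reaches the positive/negative contrast threshold; the pixel
    then ignores changes for `refractory` seconds.
    """
    events = []
    ref_log = log_intensity[0]   # log intensity at the last event
    last_event_t = -np.inf
    for ti, li in zip(t, log_intensity):
        if ti - last_event_t < refractory:
            continue  # pixel is still within its refractory period
        delta = li - ref_log
        if delta >= c_pos:
            events.append((ti, +1))
            ref_log, last_event_t = li, ti
        elif delta <= -c_neg:
            events.append((ti, -1))
            ref_log, last_event_t = li, ti
    return events
```

For example, a linear ramp in log intensity yields a regular train of positive events whose rate scales with the ramp slope divided by the contrast threshold, which is why camera speed, threshold and refractory period jointly determine how densely a scene is sampled.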

This synthetic event dataset allows for a retrospective comparison between event-based and image-based NeRF reconstruction methods, as the event sequences were simulated under conditions highly similar to those of the NeRF synthetic dataset. In particular, we adopt the same camera intrinsics and the same camera distance to the object at the origin. Furthermore, the event camera travels in a hemispherical or spherical spiral motion about the object, yielding a similar distribution of training camera poses. We also use the same test camera poses/views. Nonetheless, this new synthetic event dataset is not specific to NeRF reconstruction: it is also suitable for novel view synthesis, 3D reconstruction, localization and SLAM in general.

If you use this synthetic event dataset for your work, please cite:

@inproceedings{low2023_robust-e-nerf,
  title = {Robust e-NeRF: NeRF from Sparse & Noisy Events under Non-Uniform Motion},
  author = {Low, Weng Fei and Lee, Gim Hee},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year = {2023}
}

Dataset Structure and Contents

This synthetic event dataset is organized first by scene, then by level of difficulty. Each sequence recording is given in the form of a ROS bag named esim.bag, with the following data streams:

| ROS Topic | Data | Publishing Rate (Hz) |
| --- | --- | --- |
| `/cam0/events` | Events | - |
| `/cam0/pose` | Camera pose | 1000 |
| `/imu` | IMU measurements with simulated noise | 1000 |
| `/cam0/image_raw` | RGB image | 250 |
| `/cam0/depthmap` | Depth map | 10 |
| `/cam0/optic_flow` | Optical flow map | 10 |
| `/cam0/camera_info` | Camera intrinsics and lens distortion parameters | 10 |
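As a starting point for consuming the recordings, the sketch below reads events from `esim.bag` with the ROS 1 `rosbag` Python API and accumulates a slice of them into an image. It assumes a ROS 1 environment and that the event topic carries `dvs_msgs/EventArray` messages (an assumption based on common ESIM setups, not stated in this card); `events_to_frame` and `read_events` are hypothetical helper names.

```python
import numpy as np

def events_to_frame(xs, ys, ps, height, width):
    """Accumulate signed event counts into an image, a common way to
    visualize a slice of an event stream."""
    frame = np.zeros((height, width), dtype=np.int32)
    np.add.at(frame, (ys, xs), ps)  # unbuffered add handles repeated pixels
    return frame

def read_events(bag_path, topic="/cam0/events"):
    """Yield (t, x, y, polarity) tuples from the recording.

    Requires a ROS 1 installation providing the `rosbag` package, hence the
    deferred import.
    """
    import rosbag
    with rosbag.Bag(bag_path) as bag:
        for _, msg, _ in bag.read_messages(topics=[topic]):
            for e in msg.events:  # assumed dvs_msgs/EventArray layout
                yield e.ts.to_sec(), e.x, e.y, 1 if e.polarity else -1
```

The remaining topics (pose, IMU, images, depth, optical flow, camera info) can be read the same way via `read_messages` with the corresponding topic names from the table above.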

Each recording is obtained by running the improved ESIM with the associated esim.conf configuration file, which references the camera intrinsics configuration files pinhole_mono_nodistort_f={1111, 1250}.yaml and the camera trajectory CSV files {hemisphere, sphere}_spiral-rev=4[...].csv.
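For intuition about the trajectory files, the sketch below samples camera positions on a hemispherical spiral about the origin, of the kind the `hemisphere_spiral-rev=4` name suggests: the azimuth sweeps four full revolutions while the elevation rises from the equator to the pole. The function name, radius and sample count are illustrative assumptions, not the dataset's exact parameters, and a full pose would additionally need a look-at rotation toward the object at the origin.

```python
import numpy as np

def hemisphere_spiral(n, radius=4.0, revolutions=4):
    """Sample n camera positions on a hemispherical spiral around the origin
    (hypothetical helper; parameters are illustrative)."""
    u = np.linspace(0.0, 1.0, n)
    azimuth = 2.0 * np.pi * revolutions * u       # 4 full turns
    elevation = 0.5 * np.pi * u                   # equator -> pole
    x = radius * np.cos(elevation) * np.cos(azimuth)
    y = radius * np.cos(elevation) * np.sin(azimuth)
    z = radius * np.sin(elevation)
    return np.stack([x, y, z], axis=1)
```

A spherical variant would simply sweep the elevation from pole to pole instead of from the equator to the pole.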

The validation and test views of each scene are given in the views/ folder, which is structured according to the NeRF synthetic dataset (except for the depth and normal maps). These views are rendered from the scene Blend-files, given in the scenes/ folder. Specifically, we create a Conda environment with Blender installed as a Python module, according to these instructions, and run the bpy_render_views.py Python script to render the evaluation views.

Setup

  1. Install Git LFS according to the official instructions.
  2. Setup Git LFS for your user account with:
    git lfs install
    
  3. Clone this dataset repository into the desired destination directory with:
    git lfs clone https://huggingface.co/datasets/wengflow/robust-e-nerf
    
  4. To minimize disk usage, remove the .git/ folder. Note, however, that this complicates pulling future changes from this upstream dataset repository.