ToF-360 Dataset

[Figure: the dataset's multiple modalities]

Overview

The ToF-360 dataset consists of spherical RGB-D images with instance-level semantic and room-layout annotations, covering 4 unique scenes. It contains 179 equirectangular RGB images along with the corresponding depth, surface-normal, XYZ, and HHA images, labeled with building-defining object categories and image-based layout boundaries (ceiling-wall, wall-floor). The dataset enables the development of scene-understanding tasks based on single-shot reconstruction, without the need for global alignment, in indoor spaces.

Dataset Modalities

Each scene in the dataset has its own folder. All the modalities for a scene are contained in that folder as <scene>/<modality>, as sketched below.
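As a minimal sketch of iterating over this layout (the dataset root path is hypothetical, and the modality folder names should be checked against the actual scene folders):

```python
from pathlib import Path

# Hypothetical local path to the extracted dataset.
dataset_root = Path("ToF-360")

# Walk <scene>/<modality> and report how many files each modality holds.
for scene_dir in sorted(p for p in dataset_root.iterdir() if p.is_dir()):
    for modality_dir in sorted(p for p in scene_dir.iterdir() if p.is_dir()):
        n_files = sum(1 for f in modality_dir.iterdir() if f.is_file())
        print(f"{scene_dir.name}/{modality_dir.name}: {n_files} files")
```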

RGB images:
RGB images are 24-bit equirectangular color images, converted from the raw dual-fisheye images captured by the sensor.

Manhattan aligned RGB images:
We followed the preprocessing code proposed by [LGT-Net] to create the Manhattan-aligned RGB images. Sample code for our dataset is in assets/preprocessing/align_manhattan.py.

Depth images:
Depth images are stored as 16-bit grayscale PNGs with a maximum depth of 128 m and a sensitivity of 1/512 m. Missing values are encoded as 0. Note that depth is defined as the distance from the point-center of the camera in the panoramas.
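Given this encoding, decoding to meters is a division by 512 with the zero value masked out. A minimal sketch, assuming the 16-bit PNGs can be read with Pillow (the filename is hypothetical):

```python
import numpy as np
from PIL import Image

def load_depth_meters(path):
    # Raw units are 1/512 m, so the full 16-bit range spans 128 m.
    raw = np.asarray(Image.open(path), dtype=np.uint16)
    depth = raw.astype(np.float32) / 512.0  # 1/512 m units -> meters
    depth[raw == 0] = np.nan                # 0 encodes a missing measurement
    return depth

# depth = load_depth_meters("scene1/depth/000000.png")  # hypothetical path
```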

XYZ images:
XYZ images are saved in NumPy's .npy binary format. Each file contains a pixel-aligned set of 3D points with millimeter sensitivity and has shape (Height, Width, 3 [x, y, z]).
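A minimal loading sketch (the filename is hypothetical, and the millimeter unit is an assumption based on the stated sensitivity):

```python
import numpy as np

xyz = np.load("scene1/xyz/000000.npy")  # hypothetical path
assert xyz.ndim == 3 and xyz.shape[2] == 3, "expected (Height, Width, 3)"
xyz_m = xyz.astype(np.float32) / 1000.0  # millimeters -> meters, if stored in mm
```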

Normal images:
Normals are 127.5-centered per-channel surface-normal images, saved as 24-bit RGB PNGs where red is the horizontal component (more red to the right), green is the vertical component (more green downwards), and blue points toward the camera. They are computed with Open3D's normal-estimation function. The tool for creating normal images from 3D data is located at assets/preprocessing/depth2normal.py.
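Inverting the 127.5-centered encoding recovers unit vectors; a minimal sketch, assuming the PNGs can be read with Pillow (the filename is hypothetical):

```python
import numpy as np
from PIL import Image

def load_normals(path):
    rgb = np.asarray(Image.open(path), dtype=np.float32)
    n = (rgb - 127.5) / 127.5                       # map [0, 255] -> [-1, 1]
    norm = np.linalg.norm(n, axis=2, keepdims=True)
    return n / np.maximum(norm, 1e-8)               # renormalize quantized vectors

# normals = load_normals("scene1/normal/000000.png")  # hypothetical path
```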

HHA images:
HHA images encode horizontal disparity, height above ground, and angle with gravity in their three channels, respectively. We followed Depth2HHA-python to create them. The code is located at assets/preprocessing/getHHA.py.
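A minimal sketch of splitting the three channels (the filename is hypothetical, and the channel order is assumed to follow the description above):

```python
import numpy as np
from PIL import Image

hha = np.asarray(Image.open("scene1/hha/000000.png"))  # hypothetical path
disparity = hha[..., 0]  # horizontal disparity
height = hha[..., 1]     # height above ground
angle = hha[..., 2]      # angle with gravity
```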

Annotation:
We used the COCO Annotator for labelling the RGB data, following ontology-based annotation guidelines developed for both RGB-D and point cloud data.
<scene>/annotation contains JSON-format files, while <scene>/semantics and <scene>/instances hold image-like labeled data stored as .npy binary files.
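A minimal sketch of loading the image-like label arrays (the filenames are hypothetical):

```python
import numpy as np

semantics = np.load("scene1/semantics/000000.npy")  # per-pixel category ids
instances = np.load("scene1/instances/000000.npy")  # per-pixel instance ids
print("categories present:", np.unique(semantics))
print("number of instances:", len(np.unique(instances)))
```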

Room layout annotation:
Room layout annotations are stored in the same JSON format as PanoAnnotator. Please refer to that repository for more details.

Tools

This repository provides some basic tools for producing the preprocessed data and evaluating the dataset. The tools are located in the assets/ folder.

Croissant metadata

You can follow the instructions provided by Hugging Face. croissant_metadata.json is also available in this repository.
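One common way to consume the Croissant metadata is the mlcroissant library; this is a hedged sketch rather than an official loader, and record-set names must be taken from the metadata itself:

```python
import mlcroissant as mlc

# Load the Croissant metadata file shipped with this repository.
ds = mlc.Dataset(jsonld="croissant_metadata.json")

# Record-set names depend on the metadata; list them before fetching records.
for record_set in ds.metadata.record_sets:
    print(record_set)

# records = ds.records(record_set="<record-set-name>")  # then iterate lazily
```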

Citations

Coming soon...
