---
license: cc-by-nc-sa-4.0
---


# ToF-360 Dataset  

![Figure showing multiple modalities](assets/figure/figure_1.png?raw=true)

## Overview  
The ToF-360 dataset consists of spherical RGB-D images with instance-level semantic and room-layout annotations, covering 4 unique scenes. It contains 179 equirectangular RGB images along with the corresponding depth maps, surface normals, XYZ images, and HHA images, labeled with building-defining object categories and image-based layout boundaries (ceiling-wall, wall-floor). The dataset enables the development of scene-understanding methods based on single-shot reconstruction, without the need for global alignment in indoor spaces.

## Dataset Modalities  
Each scene in the dataset has its own folder. All modalities for a scene are stored in that folder as `<scene>/<modality>`.

**RGB images:**  
RGB images are 24-bit equirectangular color images, converted from the raw dual-fisheye images captured by the sensor.

**Manhattan aligned RGB images:**  
We followed the preprocessing code proposed by [LGT-Net](https://github.com/zhigangjiang/LGT-Net) to create Manhattan-aligned RGB images. Sample code for our dataset is provided in `assets/preprocessing/align_manhattan.py`.

**Depth images:**  
Depth images are stored as 16-bit grayscale PNGs with a maximum depth of 128 m and a sensitivity of 1/512 m. Missing values are encoded as 0. Note that depth is defined as the distance from the point center of the camera in the panoramic images.
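
A raw 16-bit value can therefore be converted to meters by dividing by 512. The following is a minimal sketch of decoding a depth image with Pillow and NumPy; the file path is a placeholder and should be adapted to your scene layout.

```python
import numpy as np
from PIL import Image

# Placeholder path; replace with an actual <scene>/depth file.
depth_raw = np.asarray(Image.open("scene_0/depth/000000.png"), dtype=np.uint16)

valid = depth_raw > 0                              # 0 encodes missing measurements
depth_m = depth_raw.astype(np.float32) / 512.0     # 1/512 m per unit, up to 128 m

print("depth range [m]:", depth_m[valid].min(), depth_m[valid].max())
```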

**XYZ images:**  
XYZ images are saved in the [NumPy](https://numpy.org/) `.npy` binary file format. Each file contains a pixel-aligned set of 3D points with millimeter sensitivity and has shape (Height, Width, 3 [x, y, z]).
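
A minimal loading sketch, assuming a hypothetical file path and the (Height, Width, 3) layout described above:

```python
import numpy as np

# Placeholder path; replace with an actual <scene>/xyz file.
xyz = np.load("scene_0/xyz/000000.npy")       # shape: (H, W, 3), millimeter units
assert xyz.ndim == 3 and xyz.shape[2] == 3

points_m = xyz.reshape(-1, 3) / 1000.0        # flatten to an N x 3 point list in meters
```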

**Normal images:**  
Normals are 127.5-centered per-channel surface normal images. The normal vectors are saved as 24-bit RGB PNGs where Red is the horizontal component (more red to the right), Green is vertical (more green downwards), and Blue points towards the camera. They are computed with the [normal estimation function](https://www.open3d.org/docs/0.7.0/python_api/open3d.geometry.estimate_normals.html) in [Open3D](https://github.com/isl-org/Open3D). The tool for creating normal images from 3D data is located in `assets/preprocessing/depth2normal.py`.
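
Given the 127.5-centered encoding, unit normals can be recovered by mapping each channel from [0, 255] back to [-1, 1]. A minimal sketch (the file path is a placeholder, and the exact quantization of the encoder is assumed):

```python
import numpy as np
from PIL import Image

# Placeholder path; replace with an actual <scene>/normal file.
rgb = np.asarray(Image.open("scene_0/normal/000000.png"), dtype=np.float32)

normals = (rgb - 127.5) / 127.5                                    # [0, 255] -> [-1, 1]
normals /= np.linalg.norm(normals, axis=2, keepdims=True) + 1e-8   # renormalize to unit length
```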

**HHA images:**  
HHA images contain horizontal disparity, height above ground, and angle with gravity in their three channels, respectively.
We followed [Depth2HHA-python](https://github.com/charlesCXK/Depth2HHA-python) to create them. The code is located in `assets/preprocessing/getHHA.py`.
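
A minimal sketch of reading an HHA image and separating its channels, assuming the HHA images are stored as 3-channel PNGs like the other image modalities (the path is a placeholder):

```python
import numpy as np
from PIL import Image

# Placeholder path; replace with an actual <scene>/hha file.
hha = np.asarray(Image.open("scene_0/hha/000000.png"))

disparity, height, angle = hha[..., 0], hha[..., 1], hha[..., 2]
```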

**Annotation:**  
We used the [COCO Annotator](https://github.com/jsbroks/coco-annotator) for labelling the RGB data. We followed [ontology-based annotation guidelines](https://www.dfki.de/fileadmin/user_upload/import/13246_EC3_2023_Ontology_based_annotation_of_RGB_D_images_and_point_clouds_for_a_domain_adapted_dataset.pdf) developed for both RGB-D and point cloud data.  
`<scene>/annotation` contains JSON files, while `<scene>/semantics` and `<scene>/instances` contain image-like label data stored as `.npy` binary files.
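
A minimal sketch of loading the per-scene annotations; the file names are placeholders, and the JSON is assumed to follow the COCO Annotator export format:

```python
import json
import numpy as np

# Placeholder paths; replace with actual files from a scene folder.
with open("scene_0/annotation/000000.json") as f:
    coco_annotation = json.load(f)                        # COCO Annotator export

semantics = np.load("scene_0/semantics/000000.npy")       # per-pixel category labels
instances = np.load("scene_0/instances/000000.npy")       # per-pixel instance ids

print("categories present:", np.unique(semantics))
```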

**Room layout annotation:**  
Room layout annotations are stored in the same JSON format as [PanoAnnotator](https://github.com/SunDaDenny/PanoAnnotator). Please refer to that repository for more details.

## Tools  
This repository provides some basic tools for preprocessing the data and for evaluation. The tools are located in the `assets/` folder.

## Croissant metadata
You can follow [these instructions](https://huggingface.co/docs/datasets-server/croissant) provided by Hugging Face. A `croissant_metadata.json` file is also available.

## Citations  
Coming soon...