sayakpaul (HF staff) committed
Commit 5898e17 (parent: 7d65b61)

add: detailed dataset card.

Files changed (1): README.md (+162, -2)
README.md CHANGED
@@ -42,6 +42,7 @@ dataset_info:
 - [Data Instances](#data-instances)
 - [Data Fields](#data-fields)
 - [Data Splits](#data-splits)
+ - [Visualization](#visualization)
 - [Dataset Creation](#dataset-creation)
 - [Curation Rationale](#curation-rationale)
 - [Source Data](#source-data)
@@ -62,7 +63,7 @@ dataset_info:

 - **Homepage:** [NYU Depth Dataset V2 homepage](https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html)
 - **Repository:** Fast Depth [repository](https://github.com/dwofk/fast-depth), which was used to source the dataset in this repository. It is a preprocessed version of the original NYU Depth V2 dataset linked above. It is also used in [TensorFlow Datasets](https://www.tensorflow.org/datasets/catalog/nyu_depth_v2).
- - **Paper:** [Indoor Segmentation and Support Inference from RGBD Images](http://cs.nyu.edu/~silberman/papers/indoor_seg_support.pdf) and [FastDepth: Fast Monocular Depth Estimation on Embedded Systems](https://arxiv.org/abs/1903.03273)
+ - **Papers:** [Indoor Segmentation and Support Inference from RGBD Images](http://cs.nyu.edu/~silberman/papers/indoor_seg_support.pdf) and [FastDepth: Fast Monocular Depth Estimation on Embedded Systems](https://arxiv.org/abs/1903.03273)
 - **Point of Contact:** [Nathan Silberman](mailto:silberman@cs.nyu.edu) and [Diana Wofk](mailto:dwofk@alum.mit.edu)

 ### Dataset Summary
@@ -84,4 +85,163 @@ The dataset has several components:

 ### Supported Tasks

- - `depth-estimation`: Depth estimation is the task of approximating the perceived depth of a given image. In other words, it's about measuring the distance of each image pixel from the camera.
88
+ - `depth-estimation`: Depth estimation is the task of approximating the perceived depth of a given image. In other words, it's about measuring the distance of each image pixel from the camera.
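To make the task concrete, the snippet below runs an off-the-shelf monocular depth estimation model on one sample from this dataset. This is a minimal sketch: it assumes the `transformers` library and the `Intel/dpt-large` checkpoint, neither of which is required by the dataset itself, and it uses streaming to avoid a full download (drop `streaming=True` if the loader does not support it).

```py
from datasets import load_dataset
from transformers import pipeline

# Stream one training sample so the full dataset does not have to be downloaded up front.
ds = load_dataset("sayakpaul/nyu_depth_v2", split="train", streaming=True)
sample = next(iter(ds))

# Illustrative checkpoint; any depth-estimation model can be substituted here.
depth_estimator = pipeline("depth-estimation", model="Intel/dpt-large")
prediction = depth_estimator(sample["image"])

# The pipeline returns the predicted depth both as a tensor ("predicted_depth")
# and as a PIL image ("depth") that can be inspected directly.
prediction["depth"].show()
```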
89
+
90
+
91
+ ### Languages
92
+
93
+ English.
94
+
95
+ ## Dataset Structure
96
+
97
+ ### Data Instances
98
+
99
+ A data point comprises an image and its annotation depth map for both the `train` and `validation` splits.
100
+
101
+ ```
102
+ {
103
+   'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB at 0x1FF32A3EDA0>,
104
+   'depth_map': <PIL.PngImagePlugin.PngImageFile image mode=L at 0x1FF32E5B978>,
105
+ }
106
+ ```
107
+
108
+ ### Data Fields
109
+
110
+ - `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column (`dataset[0]["image"]`), the image file is automatically decoded. Decoding a large number of image files can take a significant amount of time, so it is important to query the sample index before the `"image"` column, *i.e.*, `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`, as illustrated in the sketch after this list.
111
+ - `depth_map`: A `PIL.Image.Image` object containing the annotation depth map.
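As a minimal sketch of the access pattern described above (assuming the dataset has been loaded with the `datasets` library; the first call downloads it):

```py
import numpy as np
from datasets import load_dataset

# Downloads the dataset on first use.
ds = load_dataset("sayakpaul/nyu_depth_v2")

# Query the sample index first, then the column: only this one image is decoded.
sample = ds["train"][0]
image = sample["image"]                # PIL.Image.Image (RGB)
depth = np.array(sample["depth_map"])  # depth map as a NumPy array

print(image.size, depth.shape, depth.min(), depth.max())

# By contrast, ds["train"]["image"][0] would decode the entire image column first.
```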
112
+
113
+ ### Data Splits
114
+
115
+ The data is split into training and validation sets. The training split contains 47,584 images, and the validation split contains 654 images.
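A minimal sketch for checking the split sizes, or for loading a single split directly:

```py
from datasets import load_dataset

# Load everything as a DatasetDict, or a single split via the `split` argument.
ds = load_dataset("sayakpaul/nyu_depth_v2")
val_ds = load_dataset("sayakpaul/nyu_depth_v2", split="validation")

# Expected, per the numbers above: {'train': 47584, 'validation': 654}
print({name: split.num_rows for name, split in ds.items()})
print(val_ds.num_rows)  # 654
```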
116
+
117
+ ## Visualization
118
+
119
+ You can use the following code snippet to visualize samples from the dataset:
120
+
121
+ ```py
122
+ from datasets import load_dataset
123
+ import numpy as np
124
+ import matplotlib.pyplot as plt
125
+
126
+
127
+ cmap = plt.cm.viridis
128
+
129
+ ds = load_dataset("sayakpaul/nyu_depth_v2")
130
+
131
+
132
+ def colored_depthmap(depth, d_min=None, d_max=None):
133
+     if d_min is None:
134
+         d_min = np.min(depth)
135
+     if d_max is None:
136
+         d_max = np.max(depth)
137
+     depth_relative = (depth - d_min) / (d_max - d_min)
138
+     return 255 * cmap(depth_relative)[:,:,:3]  # H, W, C
139
+
140
+
141
+ def merge_into_row(input, depth_target):
142
+     input = np.array(input)
143
+     depth_target = np.squeeze(np.array(depth_target))
144
+
145
+     d_min = np.min(depth_target)
146
+     d_max = np.max(depth_target)
147
+     depth_target_col = colored_depthmap(depth_target, d_min, d_max)
148
+     img_merge = np.hstack([input, depth_target_col])
149
+
150
+     return img_merge
151
+
152
+
153
+ random_indices = np.random.choice(len(ds["train"]), 9).tolist()
154
+ train_set = ds["train"]
155
+
156
+ plt.figure(figsize=(15, 6))
157
+
158
+ for i, idx in enumerate(random_indices):
159
+     ax = plt.subplot(3, 3, i + 1)
160
+     image_viz = merge_into_row(
161
+         train_set[idx]["image"], train_set[idx]["depth_map"]
162
+     )
163
+     plt.imshow(image_viz.astype("uint8"))
164
+     plt.axis("off")
165
+ ```
166
+
167
+ ## Dataset Creation
168
+
169
+ ### Curation Rationale
170
+
171
+ The rationale from [the paper](http://cs.nyu.edu/~silberman/papers/indoor_seg_support.pdf) that introduced the NYU Depth V2 dataset:
172
+
173
+ > We present an approach to interpret the major surfaces, objects, and support relations of an indoor scene from an RGBD image. Most existing work ignores physical interactions or is applied only to tidy rooms and hallways. Our goal is to parse typical, often messy, indoor scenes into floor, walls, supporting surfaces, and object regions, and to recover support relationships. One of our main interests is to better understand how 3D cues can best inform a structured 3D interpretation.
174
+
175
+ ### Source Data
176
+
177
+ #### Initial Data Collection
178
+
179
+ > The dataset consists of 1449 RGBD images, gathered from a wide range
180
+ of commercial and residential buildings in three different US cities, comprising
181
+ 464 different indoor scenes across 26 scene classes. A dense per-pixel labeling was
182
+ obtained for each image using Amazon Mechanical Turk.
183
+
184
+ #### Who are the source language producers?
185
+
186
+ [TODO]
187
+
188
+ ### Annotations
189
+
190
+ #### Annotation process
191
+
192
+ This is an involved process. Interested readers are referred to Sections 2, 3, and 4 of the [original paper](http://cs.nyu.edu/~silberman/papers/indoor_seg_support.pdf).
193
+
194
+ #### Who are the annotators?
195
+
196
+ Amazon Mechanical Turk (AMT) annotators.
197
+
198
+ ### Personal and Sensitive Information
199
+
200
+ [More Information Needed]
201
+
202
+ ## Considerations for Using the Data
203
+
204
+ ### Social Impact of Dataset
205
+
206
+ [More Information Needed]
207
+
208
+ ### Discussion of Biases
209
+
210
+ [More Information Needed]
211
+
212
+ ### Other Known Limitations
213
+
214
+ [More Information Needed]
215
+
216
+ ## Additional Information
217
+
218
+ ### Dataset Curators
219
+
220
+ * Original NYU Depth V2 dataset: Nathan Silberman, Derek Hoiem, Pushmeet Kohli, Rob Fergus
221
+ * Preprocessed version: Diana Wofk, Fangchang Ma, Tien-Ju Yang, Sertac Karaman, Vivienne Sze
222
+
223
+ ### Licensing Information
224
+
225
+ The preprocessed NYU Depth V2 dataset is licensed under an [MIT License](https://github.com/dwofk/fast-depth/blob/master/LICENSE).
226
+
227
+ ### Citation Information
228
+
229
+ ```bibtex
230
+ @inproceedings{Silberman:ECCV12,
231
+   author    = {Nathan Silberman and Derek Hoiem and Pushmeet Kohli and Rob Fergus},
232
+   title     = {Indoor Segmentation and Support Inference from RGBD Images},
233
+   booktitle = {ECCV},
234
+   year      = {2012}
235
+ }
236
+
237
+ @inproceedings{icra_2019_fastdepth,
238
+   author    = {{Wofk, Diana and Ma, Fangchang and Yang, Tien-Ju and Karaman, Sertac and Sze, Vivienne}},
239
+   title     = {{FastDepth: Fast Monocular Depth Estimation on Embedded Systems}},
240
+   booktitle = {{IEEE International Conference on Robotics and Automation (ICRA)}},
241
+   year      = {{2019}}
242
+ }
243
+ ```
244
+
245
+ ### Contributions
246
+
247
+ Thanks to [@sayakpaul](https://huggingface.co/sayakpaul) for adding this dataset.