sayakpaul committed
Commit 495042a
1 Parent(s): 5e193c3

Dataset card (#7)

- add: dataset card (partial). (03b10229bdfe1a09ecc4975b0e15300418912504)
- fix: task_ids. (d5f4f17971b3d0012b10e7d7b36a8e4bf893f6c3)
- fix: task_ids. (7d65b61acf0d54d3bba945016c62a6ec7e7434de)
- add: detailed dataset card. (5898e175992804fba5098fc0094e3abe068057dd)
- chore: apply pr suggestions. (47871773780f2da1465962a02e1cd84a93c1ac5c)
- chore: add note about other tasks. (338f1c92bbd257b741a45b38d9148048bbc5e3ac)

Files changed (1):
  1. README.md (+229 -1)
README.md CHANGED

@@ -1,5 +1,18 @@
---
license: apache-2.0
+ language:
+ - en
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ task_categories:
+ - depth-estimation
+ task_ids: []
+ pretty_name: NYU Depth V2
+ tags:
+ - depth-estimation
+ paperswithcode_id: nyuv2
dataset_info:
  features:
  - name: image
@@ -15,4 +28,219 @@ dataset_info:
    num_examples: 654
  download_size: 35151124480
  dataset_size: 20452883313
- ---
+ ---
+
+ # Dataset Card for NYU Depth V2
+
+ ## Table of Contents
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Visualization](#visualization)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+
+ ## Dataset Description
+
+ - **Homepage:** [NYU Depth Dataset V2 homepage](https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html)
+ - **Repository:** The [FastDepth repository](https://github.com/dwofk/fast-depth), which was used to source this dataset. It provides a preprocessed version of the original NYU Depth V2 dataset linked above and is also used in [TensorFlow Datasets](https://www.tensorflow.org/datasets/catalog/nyu_depth_v2).
+ - **Papers:** [Indoor Segmentation and Support Inference from RGBD Images](http://cs.nyu.edu/~silberman/papers/indoor_seg_support.pdf) and [FastDepth: Fast Monocular Depth Estimation on Embedded Systems](https://arxiv.org/abs/1903.03273)
+ - **Point of Contact:** [Nathan Silberman](mailto:silberman@cs.nyu.edu) and [Diana Wofk](mailto:dwofk@alum.mit.edu)
+
+ ### Dataset Summary
+
+ As per the [dataset homepage](https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html):
+
+ The NYU-Depth V2 data set is comprised of video sequences from a variety of indoor scenes as recorded by both the RGB and Depth cameras from the Microsoft [Kinect](http://www.xbox.com/kinect). It features:
+
+ * 1449 densely labeled pairs of aligned RGB and depth images
+ * 464 new scenes taken from 3 cities
+ * 407,024 new unlabeled frames
+ * Each object is labeled with a class and an instance number (cup1, cup2, cup3, etc)
+
+ The dataset has several components:
+
+ * Labeled: A subset of the video data accompanied by dense multi-class labels. This data has also been preprocessed to fill in missing depth labels.
+ * Raw: The raw rgb, depth and accelerometer data as provided by the Kinect.
+ * Toolbox: Useful functions for manipulating the data and labels.
+
+ ### Supported Tasks
+
+ - `depth-estimation`: Depth estimation is the task of approximating the perceived depth of a given image. In other words, it is about estimating the distance of each image pixel from the camera.
+ - `semantic-segmentation`: Semantic segmentation is the task of associating every pixel of an image with a class label.
+
+ This dataset supports other tasks as well; you can find more about them in [this resource](https://paperswithcode.com/dataset/nyuv2).
+
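A minimal starting point for depth estimation is to load the dataset and convert an image/depth pair into NumPy arrays. The snippet below is an illustrative sketch; the value range and scale of `depth_map` depend on the preprocessing applied by the FastDepth export, so inspect them before training:

```py
import numpy as np
from datasets import load_dataset

ds = load_dataset("sayakpaul/nyu_depth_v2")
sample = ds["train"][0]

# Both columns decode to PIL images.
image = np.asarray(sample["image"], dtype=np.float32) / 255.0  # H x W x 3, scaled to [0, 1]
depth = np.asarray(sample["depth_map"], dtype=np.float32)      # H x W

# Check the depth range before deciding on any normalization scheme.
print(image.shape, depth.shape, float(depth.min()), float(depth.max()))
```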
+
+ ### Languages
+
+ English.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ A data point comprises an image and its annotation depth map for both the `train` and `validation` splits.
+
+ ```
+ {
+   'image': <PIL.PngImagePlugin.PngImageFile image mode=RGB at 0x1FF32A3EDA0>,
+   'depth_map': <PIL.PngImagePlugin.PngImageFile image mode=L at 0x1FF32E5B978>,
+ }
+ ```
+
+ ### Data Fields
+
+ - `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column (`dataset[0]["image"]`), the image file is automatically decoded. Decoding a large number of image files can take a significant amount of time, so it is important to query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
+ - `depth_map`: A `PIL.Image.Image` object containing the annotation depth map.
+
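To make the note above concrete, the two access patterns look something like this:

```py
from datasets import load_dataset

ds = load_dataset("sayakpaul/nyu_depth_v2")

# Fast: only the first sample's image file is decoded.
image = ds["train"][0]["image"]

# Slow: this decodes every image in the split before indexing,
# so avoid it for large splits like this one.
# image = ds["train"]["image"][0]
```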
+ ### Data Splits
+
+ The data is split into training and validation splits. The training split contains 47,584 images and the validation split contains 654 images.
+
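Once the dataset is loaded, the split sizes can be verified with a short check such as:

```py
from datasets import load_dataset

ds = load_dataset("sayakpaul/nyu_depth_v2")
for split, subset in ds.items():
    print(split, subset.num_rows)
# Expected, per the numbers above: 47,584 rows in train and 654 in validation.
```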
+ ## Visualization
+
+ You can use the following code snippet to visualize samples from the dataset:
+
+ ```py
+ from datasets import load_dataset
+ import numpy as np
+ import matplotlib.pyplot as plt
+
+
+ cmap = plt.cm.viridis
+
+ ds = load_dataset("sayakpaul/nyu_depth_v2")
+
+
+ # Map a depth array to an RGB image using the viridis colormap.
+ def colored_depthmap(depth, d_min=None, d_max=None):
+     if d_min is None:
+         d_min = np.min(depth)
+     if d_max is None:
+         d_max = np.max(depth)
+     depth_relative = (depth - d_min) / (d_max - d_min)
+     return 255 * cmap(depth_relative)[:, :, :3]  # H, W, C
+
+
+ # Stack the RGB image and its colorized depth map side by side.
+ def merge_into_row(input, depth_target):
+     input = np.array(input)
+     depth_target = np.squeeze(np.array(depth_target))
+
+     d_min = np.min(depth_target)
+     d_max = np.max(depth_target)
+     depth_target_col = colored_depthmap(depth_target, d_min, d_max)
+     img_merge = np.hstack([input, depth_target_col])
+
+     return img_merge
+
+
+ random_indices = np.random.choice(len(ds["train"]), 9).tolist()
+ train_set = ds["train"]
+
+ plt.figure(figsize=(15, 6))
+
+ for i, idx in enumerate(random_indices):
+     plt.subplot(3, 3, i + 1)
+     image_viz = merge_into_row(
+         train_set[idx]["image"], train_set[idx]["depth_map"]
+     )
+     plt.imshow(image_viz.astype("uint8"))
+     plt.axis("off")
+
+ plt.show()
+ ```
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ The rationale from [the paper](http://cs.nyu.edu/~silberman/papers/indoor_seg_support.pdf) that introduced the NYU Depth V2 dataset:
+
+ > We present an approach to interpret the major surfaces, objects, and support relations of an indoor scene from an RGBD image. Most existing work ignores physical interactions or is applied only to tidy rooms and hallways. Our goal is to parse typical, often messy, indoor scenes into floor, walls, supporting surfaces, and object regions, and to recover support relationships. One of our main interests is to better understand how 3D cues can best inform a structured 3D interpretation.
+
+ ### Source Data
+
+ #### Initial Data Collection
+
+ > The dataset consists of 1449 RGBD images, gathered from a wide range of commercial and residential buildings in three different US cities, comprising 464 different indoor scenes across 26 scene classes. A dense per-pixel labeling was obtained for each image using Amazon Mechanical Turk.
+
+ ### Annotations
+
+ #### Annotation process
+
+ The annotation process is involved. Interested readers are referred to Sections 2, 3, and 4 of the [original paper](http://cs.nyu.edu/~silberman/papers/indoor_seg_support.pdf).
+
+ #### Who are the annotators?
+
+ Amazon Mechanical Turk (AMT) annotators.
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ * Original NYU Depth V2 dataset: Nathan Silberman, Derek Hoiem, Pushmeet Kohli, Rob Fergus
+ * Preprocessed version: Diana Wofk, Fangchang Ma, Tien-Ju Yang, Sertac Karaman, Vivienne Sze
+
+ ### Licensing Information
+
+ The preprocessed NYU Depth V2 dataset is licensed under an [MIT License](https://github.com/dwofk/fast-depth/blob/master/LICENSE).
+
+ ### Citation Information
+
+ ```bibtex
+ @inproceedings{Silberman:ECCV12,
+   author    = {Nathan Silberman and Derek Hoiem and Pushmeet Kohli and Rob Fergus},
+   title     = {Indoor Segmentation and Support Inference from RGBD Images},
+   booktitle = {ECCV},
+   year      = {2012}
+ }
+
+ @inproceedings{icra_2019_fastdepth,
+   author    = {{Wofk, Diana and Ma, Fangchang and Yang, Tien-Ju and Karaman, Sertac and Sze, Vivienne}},
+   title     = {{FastDepth: Fast Monocular Depth Estimation on Embedded Systems}},
+   booktitle = {{IEEE International Conference on Robotics and Automation (ICRA)}},
+   year      = {{2019}}
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@sayakpaul](https://huggingface.co/sayakpaul) for adding this dataset.