Commit 03b1022 (1 parent: 5e193c3), committed by sayakpaul

add: dataset card (partial).

Files changed (1): README.md (+71, −1)
README.md CHANGED
@@ -1,5 +1,19 @@
  ---
  license: apache-2.0
+ language:
+ - en
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ task_categories:
+ - depth-estimation
+ task_ids:
+ - depth-estimation
+ pretty_name: NYU Depth V2
+ tags:
+ - depth-estimation
+ paperswithcode_id: nyuv2
  dataset_info:
    features:
    - name: image
@@ -15,4 +29,60 @@ dataset_info:
      num_examples: 654
    download_size: 35151124480
    dataset_size: 20452883313
- ---
+ ---
+
+ # Dataset Card for NYU Depth V2
+
+ ## Table of Contents
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+ - [Dataset Summary](#dataset-summary)
+ - [Supported Tasks](#supported-tasks)
+ - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+ - [Data Instances](#data-instances)
+ - [Data Fields](#data-fields)
+ - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+ - [Curation Rationale](#curation-rationale)
+ - [Source Data](#source-data)
+ - [Annotations](#annotations)
+ - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+ - [Social Impact of Dataset](#social-impact-of-dataset)
+ - [Discussion of Biases](#discussion-of-biases)
+ - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+ - [Dataset Curators](#dataset-curators)
+ - [Licensing Information](#licensing-information)
+ - [Citation Information](#citation-information)
+ - [Contributions](#contributions)
+
+
+ ## Dataset Description
+
+ - **Homepage:** [NYU Depth Dataset V2 homepage](https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html)
+ - **Repository:** The [FastDepth repository](https://github.com/dwofk/fast-depth), which was used to source the dataset in this repository. It is a preprocessed version of the original NYU Depth V2 dataset linked above and is also used in [TensorFlow Datasets](https://www.tensorflow.org/datasets/catalog/nyu_depth_v2).
+ - **Paper:** [Indoor Segmentation and Support Inference from RGBD Images](http://cs.nyu.edu/~silberman/papers/indoor_seg_support.pdf) and [FastDepth: Fast Monocular Depth Estimation on Embedded Systems](https://arxiv.org/abs/1903.03273)
+ - **Point of Contact:** [Nathan Silberman](mailto:silberman@cs.nyu.edu) and [Diana Wofk](mailto:dwofk@alum.mit.edu)
+
+ ### Dataset Summary
+
+ As per the [dataset homepage](https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html):
+
+ The NYU-Depth V2 data set is comprised of video sequences from a variety of indoor scenes as recorded by both the RGB and Depth cameras from the Microsoft [Kinect](http://www.xbox.com/kinect). It features:
+
+ * 1449 densely labeled pairs of aligned RGB and depth images
+ * 464 new scenes taken from 3 cities
+ * 407,024 new unlabeled frames
+ * Each object is labeled with a class and an instance number (cup1, cup2, cup3, etc.)
+
+ The dataset has several components:
+
+ * Labeled: A subset of the video data accompanied by dense multi-class labels. This data has also been preprocessed to fill in missing depth labels.
+ * Raw: The raw RGB, depth and accelerometer data as provided by the Kinect.
+ * Toolbox: Useful functions for manipulating the data and labels.
+
+ ### Supported Tasks
+
+ - `depth-estimation`: Depth estimation is the task of approximating the perceived depth of a given image. In other words, it's about measuring the distance of each image pixel from the camera.
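
For reference, here is a minimal, hypothetical usage sketch for this task (it is not part of the diff above). It loads the dataset with the `datasets` library and reads one RGB/depth pair. The repository id `sayakpaul/nyu_depth_v2`, the `validation` split name, and the `depth_map` column are assumptions; the metadata in the card only shows the `image` feature and a 654-example split.

```python
# Hypothetical usage sketch: the repo id, split name, and "depth_map" column are assumed.
from datasets import load_dataset

# Load the preprocessed NYU Depth V2 data; 654 examples matches the split size
# listed in the dataset_info metadata above.
ds = load_dataset("sayakpaul/nyu_depth_v2", split="validation")

sample = ds[0]
rgb = sample["image"]        # PIL image: the RGB frame from the Kinect sequence
depth = sample["depth_map"]  # assumed field name for the per-pixel depth target

print(ds.num_rows, rgb.size)
```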