ootts committed on
Commit
420b4d2
1 Parent(s): f7e1011

dataset card

Files changed (1)
  1. README.md +63 -46
README.md CHANGED
@@ -1,102 +1,119 @@
  ---
- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
- # Doc / guide: https://huggingface.co/docs/hub/datasets-cards
- {}
  ---

- # Dataset Card for Dataset Name

  ## Dataset Description

- - **Homepage:**
- - **Repository:**
- - **Paper:**
- - **Leaderboard:**
- - **Point of Contact:**

  ### Dataset Summary

- This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
 
  ### Supported Tasks and Leaderboards

- [More Information Needed]

  ### Languages

- [More Information Needed]

  ## Dataset Structure

- ### Data Instances
-
- [More Information Needed]
-
  ### Data Fields

- [More Information Needed]

  ### Data Splits

- [More Information Needed]

  ## Dataset Creation

  ### Curation Rationale

- [More Information Needed]

  ### Source Data

  #### Initial Data Collection and Normalization

- [More Information Needed]

- #### Who are the source language producers?
-
- [More Information Needed]

  ### Annotations

  #### Annotation process

- [More Information Needed]
-
- #### Who are the annotators?
-
- [More Information Needed]

- ### Personal and Sensitive Information

- [More Information Needed]
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed]
-
- ### Discussion of Biases
-
- [More Information Needed]
-
- ### Other Known Limitations

- [More Information Needed]

  ## Additional Information

  ### Dataset Curators

- [More Information Needed]

  ### Licensing Information

- [More Information Needed]

  ### Citation Information

- [More Information Needed]

  ### Contributions

  ---
+ language:
+ - en
+ license: MIT?????
+ tags:
+ - novel-view-synthesis
+ - inverse-rendering
+ - material-decomposition
+ annotations_creators:
+ - expert-generated
+ pretty_name: OpenIllumination
+ size_categories:
+ - 100K<n<1M
+ task_categories: # Full list at https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts
+ - other
+ download_size: 900G???
+ #paperswithcode_id: {paperswithcode_id} # Dataset id on PapersWithCode (from the URL). Example for SQuAD: squad
+ #configs: # Optional for datasets with multiple configurations like glue.
+ #- {config_0} # Example for glue: sst2
+ #- {config_1} # Example for glue: cola
  ---

+ # Dataset Card for OpenIllumination

  ## Dataset Description

+ - **Homepage:** https://annonymous2023neuripsdataset.github.io/
+ - **Repository:** N/A for now.
+ - **Paper:** N/A for now.
+ - **Leaderboard:** N/A for now.
+ - **Point of Contact:** lal005@ucsd.edu, lic032@ucsd.edu, haosu@ucsd.edu

  ### Dataset Summary

+ Our dataset comprises 64 objects, each captured from 70 views under 13 lighting patterns and 142 One-Light-At-a-Time (OLAT) illuminations.
+ The 70 views are captured by 48 DSLR cameras and 22 high-speed cameras.

  ### Supported Tasks and Leaderboards

+ * Novel view synthesis: the dataset can be used to evaluate novel view synthesis (NVS) methods such as NeRF, TensoRF, and NeuS.
+ * Inverse rendering: the dataset can be used to evaluate inverse rendering algorithms, which decompose images into illumination, object geometry, and object materials.

  ### Languages

+ English

  ## Dataset Structure

  ### Data Fields

+ For each image, the following fields are provided:
+
+ * file_path: str, the path to the image file.
+ * light_idx: int, the illumination index: 1 to 13 for lighting patterns, or 0 to 141 for OLAT.
+ * transform_matrix: list, a 4x4 matrix representing the camera pose for this image (in OpenCV convention).
+ * camera_angle_x: float, can be used to compute the corresponding camera intrinsics, as sketched below.
+ * obj_mask: the object mask, which can be read with `imageio.imread(OBJ_MASK_PATH) > 0`; used for PSNR evaluation.
+ * com_mask (optional): the union of the object mask and the support mask, which can be read with `imageio.imread(COM_MASK_PATH) > 0`; used for training.
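
The fields above follow a NeRF-style transforms JSON. As a minimal, hypothetical sketch of how they might be consumed (the file name, JSON nesting, and paths here are assumptions, not specified by this card), one can read a frame's pose and recover the pinhole intrinsics from `camera_angle_x`:

```python
import json
import numpy as np
import imageio.v2 as imageio

# Placeholder path: the card does not specify the on-disk layout.
with open("transforms_train.json") as f:
    meta = json.load(f)

frame = meta["frames"][0]
img = imageio.imread(frame["file_path"])  # H x W x 3
h, w = img.shape[:2]

# NeRF-style convention: camera_angle_x is the horizontal field of view,
# so the focal length in pixels is 0.5 * W / tan(0.5 * camera_angle_x).
angle_x = frame.get("camera_angle_x", meta.get("camera_angle_x"))
focal = 0.5 * w / np.tan(0.5 * angle_x)
K = np.array([[focal, 0.0, 0.5 * w],
              [0.0, focal, 0.5 * h],
              [0.0, 0.0, 1.0]])

# 4x4 camera-to-world pose, in OpenCV convention (x right, y down, z forward).
c2w = np.array(frame["transform_matrix"], dtype=np.float64)

light_idx = frame["light_idx"]  # 1..13 (patterns) or 0..141 (OLAT)
```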

  ### Data Splits

+ The data is split into training and testing views. For each object captured under the 13 lighting patterns, the training and testing sets contain 38 and 10 views, respectively. For each object captured under OLAT, the training and testing sets contain 17 and 5 views, respectively.
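
Since obj_mask is used for PSNR evaluation, test-view metrics are computed over object pixels only. A minimal sketch of such a masked PSNR (assuming images loaded as floats in [0, 1]; the file paths are placeholders):

```python
import numpy as np
import imageio.v2 as imageio

def masked_psnr(pred, gt, mask, peak=1.0):
    """PSNR restricted to the pixels where mask is True."""
    mse = np.mean((pred[mask] - gt[mask]) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

gt = imageio.imread("gt_view.png").astype(np.float32) / 255.0
pred = imageio.imread("pred_view.png").astype(np.float32) / 255.0
mask = imageio.imread("obj_mask.png") > 0  # as described under Data Fields

print(f"masked PSNR: {masked_psnr(pred, gt, mask):.2f} dB")
```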

  ## Dataset Creation

  ### Curation Rationale

+ From the paper:
+
+ > Recent efforts have introduced some datasets that incorporate multiple illuminations in real-world settings. However, most of them are limited either in the number of views or the number of illuminations; few of them provide object-level data as well. Consequently, these existing datasets prove unsuitable for evaluating inverse rendering methods on real-world objects.
+ >
+ > To address this, we present a new dataset containing objects with a variety of materials, captured under multiple views and illuminations, allowing for reliable evaluation of various inverse rendering tasks with real data.

  ### Source Data

  #### Initial Data Collection and Normalization

+ From the paper:
+
+ > Our dataset was acquired using a setup similar to a traditional light stage, where densely distributed cameras and controllable lights are attached to a static frame around a central platform.

  ### Annotations

  #### Annotation process

+ From the paper:
+
+ > To obtain high-quality segmentation masks, we propose to use Segment-Anything (SAM) to perform instance segmentation. However, we find that the performance is not satisfactory. One reason is that the object categories are highly undefined. In this case, even combining the bounding box and point prompts cannot produce satisfactory results. To address this problem, we propose to use multiple bounding-box prompts to perform segmentation for each possible part and then calculate a union of the masks as the final object mask.
+ >
+ > For objects with very detailed and thin structures, e.g. hair, we use an off-the-shelf background matting method to perform object segmentation.
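
A minimal sketch of the multi-bounding-box strategy described above, using the segment-anything package (the checkpoint path and box coordinates are placeholders; this illustrates the idea rather than reproducing the authors' exact pipeline):

```python
import numpy as np
import imageio.v2 as imageio
from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

image = imageio.imread("view.png")  # H x W x 3 uint8, RGB
predictor.set_image(image)

# One (x0, y0, x1, y1) box prompt per possible object part.
boxes = [np.array([100, 150, 400, 480]),
         np.array([380, 120, 620, 300])]

# Segment each part separately, then union the masks into the object mask.
obj_mask = np.zeros(image.shape[:2], dtype=bool)
for box in boxes:
    masks, _, _ = predictor.predict(box=box, multimask_output=False)
    obj_mask |= masks[0]
```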

+ #### Who are the annotators?
+
+ Linghao Chen, Isabella Liu, and Ziyang Fu.

  ## Additional Information

  ### Dataset Curators

+ Isabella Liu, Linghao Chen, Ziyang Fu, Liwen Wu, Haian Jin, Zhong Li, Chin Ming Ryan Wong, Yi Xu, Ravi Ramamoorthi, Zexiang Xu, and Hao Su.

  ### Licensing Information

+ Non-commercial use only (to be confirmed).

  ### Citation Information

+ ```bibtex
+ @article{liu2023openillumination,
+   title={OpenIllumination: A Multi-Illumination Dataset for Inverse Rendering Evaluation on Real Objects},
+   author={Liu, Isabella and Chen, Linghao and Fu, Ziyang and Wu, Liwen and Jin, Haian and Li, Zhong and Wong, Chin Ming Ryan and Xu, Yi and Ramamoorthi, Ravi and Xu, Zexiang and Su, Hao},
+   year={2023}
+ }
+ ```

  ### Contributions