jamarks committed
Commit: e850d7c
1 Parent(s): 11a1a5c

Update README.md

Files changed (1)
  1. README.md +34 -99
README.md CHANGED
@@ -50,7 +50,7 @@ dataset_summary: '

  # Note: other available arguments include ''max_samples'', etc

- dataset = fouh.load_from_hub("jamarks/Describable-Textures-Dataset")


  # Launch the App
@@ -90,7 +90,7 @@ import fiftyone.utils.huggingface as fouh

  # Load the dataset
  # Note: other available arguments include 'max_samples', etc
- dataset = fouh.load_from_hub("jamarks/Describable-Textures-Dataset")

  # Launch the App
  session = fo.launch_app(dataset)
@@ -101,130 +101,65 @@ session = fo.launch_app(dataset)

  ### Dataset Description

- <!-- Provide a longer summary of what this dataset is. -->

- - **Curated by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Language(s) (NLP):** en
- - **License:** other
-
- ### Dataset Sources [optional]
-
- <!-- Provide the basic links for the dataset. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]

- ## Uses

- <!-- Address questions around how the dataset is intended to be used. -->

- ### Direct Use
-
- <!-- This section describes suitable use cases for the dataset. -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->

- [More Information Needed]

- ## Dataset Structure

- <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

- [More Information Needed]

  ## Dataset Creation

  ### Curation Rationale

- <!-- Motivation for the creation of this dataset. -->
-
- [More Information Needed]

  ### Source Data

- <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
-
- #### Data Collection and Processing
-
- <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
-
- [More Information Needed]
-
- #### Who are the source data producers?
-
- <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
-
- [More Information Needed]
-
- ### Annotations [optional]
-
- <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->

- #### Annotation process

- <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
-
- [More Information Needed]
-
- #### Who are the annotators?
-
- <!-- This section describes the people or systems who created the annotations. -->
-
- [More Information Needed]
-
- #### Personal and Sensitive Information
-
- <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
-
- [More Information Needed]
-
- ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
-
- ## Citation [optional]

  <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

  **BibTeX:**

- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
-
- [More Information Needed]
-
- ## More Information [optional]

- [More Information Needed]

- ## Dataset Card Authors [optional]

- [More Information Needed]

- ## Dataset Card Contact

- [More Information Needed]
 

  # Note: other available arguments include ''max_samples'', etc

+ dataset = fouh.load_from_hub("Voxel51/Describable-Textures-Dataset")


  # Launch the App


  # Load the dataset
  # Note: other available arguments include 'max_samples', etc
+ dataset = fouh.load_from_hub("Voxel51/Describable-Textures-Dataset")

  # Launch the App
  session = fo.launch_app(dataset)
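
The `max_samples` argument mentioned in the comment above is handy for pulling a small preview before downloading the full dataset. A minimal sketch, reusing only the calls and arguments that appear in this card (the sample count itself is arbitrary):

```python
import fiftyone as fo
import fiftyone.utils.huggingface as fouh

# Load only a small preview of the dataset; max_samples is the argument
# mentioned in the card's note above
preview = fouh.load_from_hub(
    "Voxel51/Describable-Textures-Dataset",
    max_samples=100,
)

# Browse the preview in the FiftyOne App
session = fo.launch_app(preview)
```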
 
101
 
102
  ### Dataset Description
103
 
104
+ "Our ability of vividly describing the content of images is a clear demonstration of the power of human visual system. Not only we can recognise objects in images (e.g. a cat, a person, or a car), but we can also describe them to the most minute details, extracting an impressive amount of information at a glance. But visual perception is not limited to the recognition and description of objects. Prior to high-level semantic understanding, most textural patterns elicit a rich array of visual impressions. We could describe a texture as "polka dotted, regular, sparse, with blue dots on a white background"; or as "noisy, line-like, and irregular".
105
 
106
+ Our aim is to reproduce this capability in machines. Scientifically, the aim is to gain further insight in how textural information may be processed, analysed, and represented by an intelligent system. Compared to classic task of textural analysis such as material recognition, such perceptual properties are much richer in variety and structure, inviting new technical challenges.
107
 
108
+ DTD is a texture database, consisting of 5640 images, organized according to a list of 47 terms (categories) inspired from human perception. There are 120 images for each category. Image sizes range between 300x300 and 640x640, and the images contain at least 90% of the surface representing the category attribute. The images were collected from Google and Flickr by entering our proposed attributes and related terms as search queries. The images were annotated using Amazon Mechanical Turk in several iterations. For each image we provide key attribute (main category) and a list of joint attributes.
109
 
110
+ The data is split in three equal parts, in train, validation and test, 40 images per class, for each split. We provide the ground truth annotation for both key and joint attributes, as well as the 10 splits of the data we used for evaluation."
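
Given the structure described above (47 categories, 120 images each, equal train/validation/test splits of 40 images per class), a quick way to sanity-check a load is to look at per-class counts and split sizes in FiftyOne. A rough sketch; the label field name "ground_truth" and the use of sample tags for splits are assumptions about this upload, not something stated in this card:

```python
import fiftyone.utils.huggingface as fouh

dataset = fouh.load_from_hub("Voxel51/Describable-Textures-Dataset")

# Per-category image counts; assumes the key attribute is stored in a
# classification field named "ground_truth" -- inspect
# dataset.get_field_schema() if the field is named differently
print(dataset.count_values("ground_truth.label"))

# Split sizes; assumes train/validation/test membership is recorded as
# sample tags rather than a separate field
for split in ("train", "validation", "test"):
    print(split, len(dataset.match_tags(split)))
```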
+ - **Curated by:** M. Cimpoi, S. Maji, I. Kokkinos, S. Mohamed, and A. Vedaldi
+ - **Funded by:** NSF Grant #1005411, JHU-HLTCOE, Google Research, ERC grant VisRec no. 228180, ANR-10-JCJC-0205
+ - **Language(s) (NLP):** en
+ - **License:** other

+ ### Dataset Sources

+ <!-- Provide the basic links for the dataset. -->

+ - **Homepage:** https://www.robots.ox.ac.uk/~vgg/data/dtd/
+ - **Paper:** https://www.robots.ox.ac.uk/~vgg/publications/2014/Cimpoi14/cimpoi14.pdf
+ - **Demo:** https://try.fiftyone.ai/datasets/describable-textures-dataset/samples

  ## Dataset Creation

  ### Curation Rationale

+ 'Patterns and textures are key characteristics of many natural objects: a shirt can be striped, the wings of a butterfly can be veined, and the skin of an animal can be scaly. Aiming at supporting this dimension in image understanding, we address the problem of describing textures with semantic attributes. We identify a vocabulary of forty-seven texture terms and use them to describe a large dataset of patterns collected "in the wild". The resulting Describable Textures Dataset (DTD) is a basis to seek the best representation for recognizing describable texture attributes in images.' - dataset authors

  ### Source Data

+ The images were collected from Google and Flickr image searches.

+ ## Citation

  <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

  **BibTeX:**

+ ```bibtex
+ @InProceedings{cimpoi14describing,
+   Author = {M. Cimpoi and S. Maji and I. Kokkinos and S. Mohamed and A. Vedaldi},
+   Title = {Describing Textures in the Wild},
+   Booktitle = {Proceedings of the {IEEE} Conf. on Computer Vision and Pattern Recognition ({CVPR})},
+   Year = {2014}}
+ ```

+ ## More Information

+ This research is based on work done at the 2012 CLSP Summer Workshop, and was partially supported by NSF Grant #1005411, ODNI via the JHU-HLTCOE, and Google Research. Mircea Cimpoi was supported by the ERC grant VisRec no. 228180 and Iasonas Kokkinos by ANR-10-JCJC-0205.

+ The development of the Describable Textures Dataset started in June and July 2012 at the Johns Hopkins Centre for Language and Speech Processing (CLSP) Summer Workshop. The authors are most grateful to Prof. Sanjeev Khudanpur and Prof. Greg Hager.

+ ## Dataset Card Authors

+ [Jacob Marks](https://huggingface.co/jamarks)