dnaveenr committed
Commit 091fb6e
1 Parent(s): efd9ab8

Add MetaShift dataset (#3900)


* Initial draft for the MetaShift Dataset.

* add dataset preprocessing and yield images.

* use selected classes.

* format code as required.

* add selected_classes as a config parameter.

* update fields in Dataset Card.

* add dataset tagset.

* Update datasets/metashift/README.md

Rename card name.

Co-authored-by: Mario Šaško <mario@huggingface.co>

* Update datasets/metashift/README.md

Naming for links and add point of contact info.

Co-authored-by: Mario Šaško <mario@huggingface.co>

* Update datasets/metashift/README.md

Fix extra whitespace.

Co-authored-by: Mario Šaško <mario@huggingface.co>

* Update datasets/metashift/README.md

Extra full stop removed.

Co-authored-by: Mario Šaško <mario@huggingface.co>

* Update datasets/metashift/README.md

Add bibtex tag.

Co-authored-by: Mario Šaško <mario@huggingface.co>

* Update datasets/metashift/metashift.py

Cleaner code changes.

Co-authored-by: Mario Šaško <mario@huggingface.co>

* Update datasets/metashift/metashift.py

Use os.path.join instead.

Co-authored-by: Mario Šaško <mario@huggingface.co>

* Update datasets/metashift/metashift.py

Use staticmethod, remove print statements.

Co-authored-by: Mario Šaško <mario@huggingface.co>

* Update datasets/metashift/metashift.py

Add task template.

Co-authored-by: Mario Šaško <mario@huggingface.co>

* Update datasets/metashift/metashift.py

add static method.

Co-authored-by: Mario Šaško <mario@huggingface.co>

* add annotation info.

* use multi-line comment.

* add minor fixes.

* add the generated meta-graphs to the card as images.

* usage of os.path.join for src_image_path.

* add config to generate metashift-attributes dataset.

* add config to expose image metadata.

* add constants as config parameters.

* add Dataset Usage section to cards.

* add name, dataset version to MetashiftConfig.

* add dataset_infos.json

* pass URLs to images and add alt tags.

* set default classes as in original repo.

* add dummy data.

* format code.

* update dataset structure section for config options.

* Update datasets/metashift/README.md

CI fixes.

Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>

* Update datasets/metashift/README.md

Correct task categories.

Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>

* Update datasets/metashift/metashift.py

Add encoding.

Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>

* add contributions section

* Update datasets/metashift/README.md

Add paperswithcode id.

Co-authored-by: Mario Šaško <mario@huggingface.co>

* Update datasets/metashift/README.md

Correct sentence.

Co-authored-by: Mario Šaško <mario@huggingface.co>

* Update datasets/metashift/README.md

Co-authored-by: Mario Šaško <mario@huggingface.co>

* Update datasets/metashift/README.md

Co-authored-by: Mario Šaško <mario@huggingface.co>

* Update datasets/metashift/README.md

Co-authored-by: Mario Šaško <mario@huggingface.co>

* Update datasets/metashift/README.md

Co-authored-by: Mario Šaško <mario@huggingface.co>

* Update datasets/metashift/README.md

Co-authored-by: Mario Šaško <mario@huggingface.co>

* Update datasets/metashift/README.md

add default classes info.

Co-authored-by: Mario Šaško <mario@huggingface.co>

* Update datasets/metashift/README.md

Co-authored-by: Mario Šaško <mario@huggingface.co>

* Update datasets/metashift/README.md

Co-authored-by: Mario Šaško <mario@huggingface.co>

* Update datasets/metashift/README.md

Co-authored-by: Mario Šaško <mario@huggingface.co>

* indent params list and update with suggestions.

* Apply suggestions from code review

* Update datasets/metashift/metashift.py

Co-authored-by: Mario Šaško <mario@huggingface.co>
Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>

Commit from https://github.com/huggingface/datasets/commit/92da6d5effacb40270c8b7ebd03e736de1106062

README.md ADDED
@@ -0,0 +1,299 @@
1
+ ---
2
+ annotations_creators:
3
+ - crowdsourced
4
+ language_creators:
5
+ - crowdsourced
6
+ languages:
7
+ - en
8
+ licenses:
9
+ - cc-by-4-0
10
+ multilinguality:
11
+ - monolingual
12
+ pretty_name: MetaShift
13
+ size_categories:
14
+ - 10K<n<100K
15
+ source_datasets:
16
+ - original
17
+ task_categories:
18
+ - image-classification
19
+ - other
20
+ task_ids:
21
+ - multi-label-image-classification
22
+ - other-other-domain-generalization
23
+ paperswithcode_id: metashift
24
+ ---
25
+
26
+ # Dataset Card for MetaShift
27
+
28
+ ## Table of Contents
29
+ - [Dataset Description](#dataset-description)
30
+ - [Dataset Summary](#dataset-summary)
31
+ - [Supported Tasks](#supported-tasks-and-leaderboards)
32
+ - [Languages](#languages)
33
+ - [Dataset Structure](#dataset-structure)
34
+ - [Data Instances](#data-instances)
35
+ - [Data Fields](#data-fields)
36
+ - [Data Splits](#data-splits)
37
+ - [Dataset Creation](#dataset-creation)
38
+ - [Curation Rationale](#curation-rationale)
39
+ - [Source Data](#source-data)
40
+ - [Annotations](#annotations)
41
+ - [Personal and Sensitive Information](#personal-and-sensitive-information)
42
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
43
+ - [Social Impact of Dataset](#social-impact-of-dataset)
44
+ - [Discussion of Biases](#discussion-of-biases)
45
+ - [Other Known Limitations](#other-known-limitations)
46
+ - [Additional Information](#additional-information)
47
+ - [Dataset Curators](#dataset-curators)
48
+ - [Licensing Information](#licensing-information)
49
+ - [Citation Information](#citation-information)
50
+
51
+ ## Dataset Description
52
+
53
+ - **Homepage:** [MetaShift homepage](https://metashift.readthedocs.io/)
54
+ - **Repository:** [MetaShift repository](https://github.com/Weixin-Liang/MetaShift)
55
+ - **Paper:** [MetaShift paper](https://arxiv.org/abs/2202.06523v1)
56
+ - **Point of Contact:** [Weixin Liang](mailto:wxliang@stanford.edu)
57
+
58
+ ### Dataset Summary
59
+
60
+ The MetaShift dataset is a collection of 12,868 sets of natural images across 410 classes. It was created for understanding the performance of a machine learning model across diverse data distributions.
61
+
62
+ The authors leverage the natural heterogeneity of Visual Genome and its annotations to construct MetaShift.
63
+ The key idea is to cluster images using their metadata, which provides the context in which each image appears.
64
+ For example: cats with cars or cats in a bathroom.
65
+ The main advantage is that the dataset contains many more coherent sets of data than other benchmarks.
66
+
67
+ Two important benefits of MetaShift:
68
+ - Contains orders of magnitude more natural data shifts than previously available.
69
+ - Provides explicit explanations of what is unique about each of its data sets and a distance score that measures the amount of distribution shift between any two of its data sets.
70
+
71
+ ### Dataset Usage
72
+
73
+ The dataset has the following configuration parameters:
74
+ - selected_classes: `list[string]`, optional, list of the classes to generate the MetaShift dataset for. If `None`, the list is equal to `['cat', 'dog', 'bus', 'truck', 'elephant', 'horse']`.
75
+ - attributes_dataset: `bool`, default `False`, if `True`, the script generates the MetaShift-Attributes dataset. Refer to the [MetaShift-Attributes Dataset](https://github.com/Weixin-Liang/MetaShift#bonus-generate-the-metashift-attributes-dataset-subsets-defined-by-subject-attributes) section for more information.
76
+ - attributes: `list[string]`, optional, list of attribute classes included in the Attributes dataset. If `None` and `attributes_dataset` is `True`, it defaults to `["cat(orange)", "cat(white)", "dog(sitting)", "dog(jumping)"]`. You can find the full attribute ontology at the link above.
77
+ - with_image_metadata: `bool`, default `False`, whether to include image metadata. If set to `True`, this will give additional metadata about each image. See [Scene Graph](https://cs.stanford.edu/people/dorarad/gqa/download.html) for more information.
78
+ - image_subset_size_threshold: `int`, default `25`, the minimum number of images required for a subset. If a subset has fewer images than this threshold, it is ignored.
79
+ - min_local_groups: `int`, default `5`, the minimum number of local groups required for an object class to be included.
80
+
81
+ Consider the following examples to get an idea of how you can use the configuration parameters:
82
+
83
+ 1. To generate the MetaShift Dataset:
84
+ ```python
85
+ load_dataset("metashift", selected_classes=['cat', 'dog', 'bus'])
86
+ ```
87
+ The full object vocabulary and its hierarchy can be seen [here](https://github.com/Weixin-Liang/MetaShift/blob/main/dataset/meta_data/class_hierarchy.json).
88
+
89
+ The default classes are `['cat', 'dog', 'bus', 'truck', 'elephant', 'horse']`
90
+
91
+ 2. To generate the MetaShift-Attributes Dataset (subsets defined by subject attributes):
92
+ ```python
93
+ load_dataset("metashift", attributes_dataset = True, attributes=["dog(smiling)", "cat(resting)"])
94
+ ```
95
+
96
+ The default attributes are `["cat(orange)", "cat(white)", "dog(sitting)", "dog(jumping)"]`
97
+
98
+ 3. To generate the dataset with additional image metadata information:
99
+ ```python
100
+ load_dataset("metashift", selected_classes=['cat', 'dog', 'bus'], with_image_metadata=True)
101
+ ```
102
+ 4. Further, you can specify your own configuration, different from the one used in the paper, as follows:
103
+ ```python
104
+ load_dataset("metashift", image_subset_size_threshold=20, min_local_groups=3)
105
+ ```
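+
+ After loading, you can inspect the generated split with the standard `datasets` API. This is a minimal sketch (note that the first call downloads the full GQA image archive, roughly 20 GB):
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("metashift", selected_classes=['cat', 'dog', 'bus'])["train"]
+ print(ds.features["label"].names)  # ['cat', 'dog', 'bus']
+ print(ds[0]["context"])            # a visual context string, e.g. 'fence'
+ ```
+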
106
+ ### Dataset Meta-Graphs
107
+
108
+ From the MetaShift GitHub repository:
109
+ > MetaShift splits the data points of each class (e.g., Cat) into many subsets based on visual contexts. Each node in the meta-graph represents one subset. The weight of each edge is the overlap coefficient between the corresponding two subsets. Node colors indicate the graph-based community detection results. Inter-community edges are colored. Intra-community edges are grayed out for better visualization. The border color of each example image indicates its community in the meta-graph. We have one such meta-graph for each of the 410 classes in the MetaShift.
110
+
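+ For reference, the overlap coefficient mentioned in the quote above is commonly defined as |A ∩ B| / min(|A|, |B|). The following is a small illustrative helper (not part of the loading script) showing how such an edge weight could be computed for two subsets of image IDs:
+ ```python
+ def overlap_coefficient(subset_a, subset_b):
+     """Szymkiewicz-Simpson overlap coefficient between two sets of image IDs."""
+     a, b = set(subset_a), set(subset_b)
+     if not a or not b:
+         return 0.0
+     return len(a & b) / min(len(a), len(b))
+ ```
+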
111
+ The following are the meta-graphs for the default classes; they were generated using the `generate_full_MetaShift.py` script.
112
+
113
+ <p align='center'>
114
+ <img width='75%' src='https://i.imgur.com/wrpezCK.jpg' alt="Cat Meta-graph" /> </br>
115
+ <b>Figure: Meta-graph: visualizing the diverse data distributions within the “cat” class. </b>
116
+ </p>
117
+
118
+ <p align='center'>
119
+ <img width='75%' src='https://i.imgur.com/FhuAwfT.jpg' alt="Dog Meta-graph" /> </br>
120
+ <b>Figure: Meta-graph for the “Dog” class, which captures meaningful semantics of the multi-modal data distribution of “Dog”. </b>
121
+ </p>
122
+
123
+ <p align='center'>
124
+ <img width='75%' src='https://i.imgur.com/FFCcN6L.jpg' alt="Bus Meta-graph" /> </br>
125
+ <b>Figure: Meta-graph for the “Bus” class. </b>
126
+ </p>
127
+
128
+ <p align='center'>
129
+ <img width='75%' src='https://i.imgur.com/rx5b5Vo.jpg' alt="Elephant Meta-graph" /> </br>
130
+ <b>Figure: Meta-graph for the "Elephant" class. </b>
131
+ </p>
132
+
133
+ <p align='center'>
134
+ <img width='75%' src='https://i.imgur.com/6f6U3S8.jpg' alt="Horse Meta-graph" /> </br>
135
+ <b>Figure: Meta-graph for the "Horse" class. </b>
136
+ </p>
137
+
138
+ <p align='center'>
139
+ <img width='75%' src='https://i.imgur.com/x9zhQD7.jpg' alt="Truck Meta-graph"/> </br>
140
+ <b>Figure: Meta-graph for the Truck class. </b>
141
+ </p>
142
+
143
+ ### Supported Tasks and Leaderboards
144
+
145
+ From the paper:
146
+ > MetaShift supports evaluation on both :
147
+ > - domain generalization and subpopulation shifts settings,
148
+ > - assessing training conflicts.
149
+
150
+ ### Languages
151
+
152
+ All the classes and subsets use English as their primary language.
153
+
154
+ ## Dataset Structure
155
+
156
+ ### Data Instances
157
+
158
+ A sample from the MetaShift dataset is provided below:
159
+
160
+ ```
161
+ {
162
+ 'image_id': '2411520',
163
+ 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x375 at 0x7F99115B8D90>,
164
+ 'label': 2,
165
+ 'context': 'fence'
166
+ }
167
+ ```
168
+
169
+ A sample from the MetaShift-Attributes dataset is provided below:
170
+ ```
171
+ {
172
+ 'image_id': '2401643',
173
+ 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x333 at 0x7FED371CE350>
174
+ 'label': 0
175
+ }
176
+ ```
177
+
178
+ A sample with image metadata included (obtained by passing `with_image_metadata=True` to `load_dataset`) is provided below:
179
+ ```
180
+ {
181
+ 'image_id': '2365745',
182
+ 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x333 at 0x7FEBCD39E4D0>
183
+ 'label': 0,
184
+ 'context': 'ground',
185
+ 'width': 500,
186
+ 'height': 333,
187
+ 'location': None,
188
+ 'weather': None,
189
+ 'objects':
190
+ {
191
+ 'object_id': ['2676428', '3215330', '1962110', '2615742', '3246028', '3232887', '3215329', '1889633', '3882667', '3882663', '1935409', '3882668', '3882669'],
192
+ 'name': ['wall', 'trailer', 'floor', 'building', 'walkway', 'head', 'tire', 'ground', 'dock', 'paint', 'tail', 'cat', 'wall'],
193
+ 'x': [194, 12, 0, 5, 3, 404, 27, 438, 2, 142, 324, 328, 224],
194
+ 'y': [1, 7, 93, 10, 100, 46, 215, 139, 90, 172, 157, 45, 246],
195
+ 'w': [305, 477, 499, 492, 468, 52, 283, 30, 487, 352, 50, 122, 274],
196
+ 'h': [150, 310, 72, 112, 53, 59, 117, 23, 240, 72, 107, 214, 85],
197
+ 'attributes': [['wood', 'green'], [], ['broken', 'wood'], [], [], [], ['black'], [], [], [], ['thick'], ['small'], ['blue']],
198
+ 'relations': [{'name': [], 'object': []}, {'name': [], 'object': []}, {'name': [], 'object': []}, {'name': [], 'object': []}, {'name': [], 'object': []}, {'name': ['of'], 'object': ['3882668']}, {'name': ['to the left of'], 'object': ['3882669']}, {'name': ['to the right of'], 'object': ['3882668']}, {'name': [], 'object': []}, {'name': [], 'object': []}, {'name': ['of'], 'object': ['3882668']}, {'name': ['perched on', 'to the left of'], 'object': ['3882667', '1889633']}, {'name': ['to the right of'], 'object': ['3215329']}]
199
+ }
200
+ }
201
+ ```
202
+
203
+ ### Data Fields
204
+
205
+ - `image_id`: Unique numeric ID of the image in the base Visual Genome dataset.
206
+ - `image`: A PIL.Image.Image object containing the image.
207
+ - `label`: An `int` classification label.
208
+ - `context`: The context in which the label is seen (only present when `attributes_dataset` is `False`). A given label can have multiple contexts.
209
+
210
+ The image metadata format is described [here](https://cs.stanford.edu/people/dorarad/gqa/download.html); a sample with metadata is provided above for reference.
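+
+ As a quick illustration of these fields, individual examples can be indexed directly. A minimal sketch using the default configuration:
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("metashift")["train"]
+ example = ds[0]
+ print(example["image_id"], example["label"], example["context"])
+ print(example["image"].size)  # (width, height) of the decoded PIL image
+ ```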
211
+
212
+ ### Data Splits
213
+
214
+ All the data is contained in the training set.
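+
+ If you need a held-out evaluation split, one option is to create it yourself with the standard `datasets` API; this is a sketch, not something the loading script provides:
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("metashift", split="train")
+ splits = ds.train_test_split(test_size=0.2, seed=42)
+ train_ds, test_ds = splits["train"], splits["test"]
+ ```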
215
+
216
+ ## Dataset Creation
217
+
218
+ ### Curation Rationale
219
+
220
+ From the paper:
221
+ > We present MetaShift as an important resource for studying the behavior of
222
+ ML algorithms and training dynamics across data with heterogeneous contexts. In order to assess the reliability and fairness of a model, we need to evaluate
223
+ its performance and training behavior across heterogeneous types of data. MetaShift contains many more coherent sets of data compared to other benchmarks. Importantly, we have explicit annotations of what makes each subset unique (e.g. cats with cars or dogs next to a bench) as well as a score that measures the distance between any two subsets, which is not available in previous benchmarks of natural data.
224
+
225
+ ### Source Data
226
+
227
+ #### Initial Data Collection and Normalization
228
+
229
+ From the paper:
230
+ > We leverage the natural heterogeneity of Visual Genome and its annotations to construct MetaShift. Visual Genome contains over 100k images across 1,702 object classes. MetaShift is constructed on a class-by-class basis. For each class, say “cat”, we pull out all cat images and proceed with generating candidate subests, constructing meta-graphs and then duantify distances of distribution shifts.
231
+
232
+ #### Who are the source language producers?
233
+
234
+ [More Information Needed]
235
+
236
+ ### Annotations
237
+
238
+ The MetaShift dataset uses Visual Genome as its base; therefore, the annotation process is the same as for the Visual Genome dataset.
239
+
240
+ #### Annotation process
241
+
242
+ From the Visual Genome paper:
243
+ > We used Amazon Mechanical Turk (AMT) as our primary source of annotations. Overall, a total of over 33,000 unique workers contributed to the dataset. The dataset was collected over the course of 6 months after 15 months of experimentation and iteration on the data representation. Approximately 800, 000 Human Intelligence Tasks (HITs) were launched on AMT, where each HIT involved creating descriptions, questions and answers, or region graphs.
244
+
245
+ #### Who are the annotators?
246
+
247
+ From the Visual Genome paper:
248
+ > Visual Genome was collected and verified entirely by crowd workers from Amazon Mechanical Turk.
249
+
250
+ ### Personal and Sensitive Information
251
+
252
+ [More Information Needed]
253
+
254
+ ## Considerations for Using the Data
255
+
256
+ ### Social Impact of Dataset
257
+
258
+ [More Information Needed]
259
+
260
+ ### Discussion of Biases
261
+
262
+ From the paper:
263
+ > One limitation is that our MetaShift might inherit existing biases in Visual Genome, which is the
264
+ base dataset of our MetaShift. Potential concerns include minority groups being under-represented
265
+ in certain classes (e.g., women with snowboard), or annotation bias where people in images are
266
+ by default labeled as male when gender is unlikely to be identifiable. Existing work in analyzing,
267
+ quantifying, and mitigating biases in general computer vision datasets can help with addressing this
268
+ potential negative societal impact.
269
+
270
+ ### Other Known Limitations
271
+
272
+ [More Information Needed]
273
+
274
+ ## Additional Information
275
+
276
+ ### Dataset Curators
277
+
278
+ [More Information Needed]
279
+
280
+ ### Licensing Information
281
+
282
+ From the paper:
283
+ > Our MetaShift and the code would use the Creative Commons Attribution 4.0 International License. Visual Genome (Krishna et al., 2017) is licensed under a Creative Commons Attribution 4.0 International License. MS-COCO (Lin et al., 2014) is licensed under CC-BY 4.0. The Visual Genome dataset uses 108, 077 images from the intersection of the YFCC100M (Thomee et al., 2016) and MS-COCO. We use the pre-processed and cleaned version of Visual Genome by GQA (Hudson & Manning, 2019).
284
+
285
+ ### Citation Information
286
+
287
+ ```bibtex
288
+ @InProceedings{liang2022metashift,
289
+ title={MetaShift: A Dataset of Datasets for Evaluating Contextual Distribution Shifts and Training Conflicts},
290
+ author={Weixin Liang and James Zou},
291
+ booktitle={International Conference on Learning Representations},
292
+ year={2022},
293
+ url={https://openreview.net/forum?id=MTex8qKavoS}
294
+ }
295
+ ```
296
+
297
+ ### Contributions
298
+
299
+ Thanks to [@dnaveenr](https://github.com/dnaveenr) for adding this dataset.
dataset_infos.json ADDED
@@ -0,0 +1 @@
1
+ {"metashift": {"description": "The MetaShift is a dataset of datasets for evaluating distribution shifts and training conflicts.\nThe MetaShift dataset is a collection of 12,868 sets of natural images across 410 classes.\nIt was created for understanding the performance of a machine learning model across diverse data distributions.\n", "citation": "@InProceedings{liang2022metashift,\ntitle={MetaShift: A Dataset of Datasets for Evaluating Contextual Distribution Shifts and Training Conflicts},\nauthor={Weixin Liang and James Zou},\nbooktitle={International Conference on Learning Representations},\nyear={2022},\nurl={https://openreview.net/forum?id=MTex8qKavoS}\n}\n", "homepage": "https://metashift.readthedocs.io/", "license": "https://github.com/Weixin-Liang/MetaShift/blob/main/LICENSE", "features": {"image_id": {"dtype": "string", "id": null, "_type": "Value"}, "image": {"decode": true, "id": null, "_type": "Image"}, "label": {"num_classes": 8, "names": ["cat", "dog", "bus", "truck", "elephant", "horse", "bowl", "cup"], "id": null, "_type": "ClassLabel"}, "context": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": {"input": "image", "output": "label"}, "task_templates": [{"task": "image-classification", "image_column": "image", "label_column": "label"}], "builder_name": "metashift", "config_name": "metashift", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 16333509, "num_examples": 86808, "dataset_name": "metashift"}}, "download_checksums": {"https://nlp.stanford.edu/data/gqa/images.zip": {"num_bytes": 21817965542, "checksum": "02ce5c49c793accd5305356de9c39a50f80a7aaac193b0203de30dbbc65bde62"}, "https://nlp.stanford.edu/data/gqa/sceneGraphs.zip": {"num_bytes": 44824862, "checksum": "59f6a3f6ac5227bac6cc615508e542dd546e309c7dbbdb666de05e42d7c51989"}, "https://github.com/Weixin-Liang/MetaShift/raw/main/dataset/meta_data/full-candidate-subsets.pkl": {"num_bytes": 15223270, "checksum": "aad22310fe65024573175216c7da18d4b69ab6da28afa10e9e5b1714650ee1e4"}}, "download_size": 21878013674, "post_processing_size": null, "dataset_size": 16333509, "size_in_bytes": 21894347183}}
dummy/metashift/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b66663b82a5584410dc037912a39ad2706d2ad0461c1489b97163eac45434b92
3
+ size 279065
metashift.py ADDED
@@ -0,0 +1,398 @@
1
+ # Copyright 2022 The HuggingFace Datasets Authors and the current dataset script contributor.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+
15
+ # _preprocess_groups(), _parse_node_str(), _load_candidate_subsets() adapted from here :
16
+ # https://github.com/Weixin-Liang/MetaShift/blob/main/dataset/generate_full_MetaShift.py
17
+
18
+ # MIT License
19
+
20
+ # Copyright (c) 2021 Weixin-Liang
21
+
22
+ # Permission is hereby granted, free of charge, to any person obtaining a copy
23
+ # of this software and associated documentation files (the "Software"), to deal
24
+ # in the Software without restriction, including without limitation the rights
25
+ # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
26
+ # copies of the Software, and to permit persons to whom the Software is
27
+ # furnished to do so, subject to the following conditions:
28
+
29
+ # The above copyright notice and this permission notice shall be included in all
30
+ # copies or substantial portions of the Software.
31
+
32
+ # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
33
+ # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
34
+ # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
35
+ # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
36
+ # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
37
+ # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
38
+ # SOFTWARE.
39
+
40
+ """MetaShift Dataset."""
41
+
42
+ import json
43
+ import os
44
+ import pickle
45
+ from collections import Counter, defaultdict
46
+
47
+ import datasets
48
+ from datasets.tasks import ImageClassification
49
+
50
+
51
+ _CITATION = """\
52
+ @InProceedings{liang2022metashift,
53
+ title={MetaShift: A Dataset of Datasets for Evaluating Contextual Distribution Shifts and Training Conflicts},
54
+ author={Weixin Liang and James Zou},
55
+ booktitle={International Conference on Learning Representations},
56
+ year={2022},
57
+ url={https://openreview.net/forum?id=MTex8qKavoS}
58
+ }
59
+ """
60
+
61
+
62
+ _DESCRIPTION = """\
63
+ The MetaShift is a dataset of datasets for evaluating distribution shifts and training conflicts.
64
+ The MetaShift dataset is a collection of 12,868 sets of natural images across 410 classes.
65
+ It was created for understanding the performance of a machine learning model across diverse data distributions.
66
+ """
67
+
68
+ _HOMEPAGE = "https://metashift.readthedocs.io/"
69
+
70
+
71
+ _LICENSE = "Creative Commons Attribution 4.0 International License"
72
+
73
+
74
+ _URLS = {
75
+ "image_files": "https://nlp.stanford.edu/data/gqa/images.zip",
76
+ "scene_graph_annotations": "https://nlp.stanford.edu/data/gqa/sceneGraphs.zip",
77
+ }
78
+
79
+ _METADATA_URLS = {
80
+ "full_candidate_subsets": "https://github.com/Weixin-Liang/MetaShift/raw/main/dataset/meta_data/full-candidate-subsets.pkl",
81
+ }
82
+
83
+ _ATTRIBUTES_URLS = {
84
+ "attributes_candidate_subsets": "https://github.com/Weixin-Liang/MetaShift/raw/main/dataset/attributes_MetaShift/attributes-candidate-subsets.pkl",
85
+ }
86
+
87
+
88
+ # See https://github.com/Weixin-Liang/MetaShift/blob/main/dataset/meta_data/class_hierarchy.json
89
+ # for the full object vocabulary and its hierarchy.
90
+ # Since the total number of all subsets is very large, all of the following scripts only generate a subset of MetaShift.
91
+
92
+ _CLASSES = [
93
+ "cat",
94
+ "dog",
95
+ "bus",
96
+ "truck",
97
+ "elephant",
98
+ "horse",
99
+ ]
100
+
101
+
102
+ _ATTRIBUTES = [
103
+ "cat(orange)",
104
+ "cat(white)",
105
+ "dog(sitting)",
106
+ "dog(jumping)",
107
+ ]
108
+
109
+
110
+ class MetashiftConfig(datasets.BuilderConfig):
111
+ """BuilderConfig for MetaShift."""
112
+
113
+ def __init__(
114
+ self,
115
+ selected_classes=None,
116
+ attributes_dataset=False,
117
+ attributes=None,
118
+ with_image_metadata=False,
119
+ image_subset_size_threshold=25,
120
+ min_local_groups=5,
121
+ **kwargs,
122
+ ):
123
+ """BuilderConfig for MetaShift.
124
+
125
+ Args:
126
+ selected_classes: `list[string]`, optional, list of the classes to generate the MetaShift dataset for.
127
+ If `None`, the list is equal to `['cat', 'dog', 'bus', 'truck', 'elephant', 'horse']`.
128
+ attributes_dataset: `bool`, default `False`, if `True`, the script generates the MetaShift-Attributes dataset.
129
+ attributes: `list[string]`, optional, list of attributes classes included in the Attributes dataset.
130
+ If `None` and `attributes_dataset` is `True`, it's equal to `["cat(orange)", "cat(white)", "dog(sitting)", "dog(jumping)"]`.
131
+ with_image_metadata: `bool`, default `False`, whether to include image metadata.
132
+ If set to `True`, this will give additional metadata about each image.
133
+ image_subset_size_threshold: `int`, default `25`, the number of images required to be considered a subset.
134
+ If the number of images is less than this threshold, the subset is ignored.
135
+ min_local_groups: `int`, default `5`, the minimum number of local groups required to be considered an object class.
136
+ **kwargs: keyword arguments forwarded to super.
137
+ """
138
+ super(MetashiftConfig, self).__init__(**kwargs)
139
+ self.selected_classes = _CLASSES if selected_classes is None else selected_classes
140
+ self.attributes_dataset = attributes_dataset
141
+ if attributes_dataset:
142
+ self.attributes = _ATTRIBUTES if attributes is None else attributes
143
+ self.with_image_metadata = with_image_metadata
144
+ self.IMAGE_SUBSET_SIZE_THRESHOLD = image_subset_size_threshold
145
+ self.MIN_LOCAL_GROUPS = min_local_groups
146
+
147
+
148
+ class Metashift(datasets.GeneratorBasedBuilder):
149
+
150
+ BUILDER_CONFIGS = [
151
+ MetashiftConfig(name="metashift", version=datasets.Version("1.0.0")),
152
+ ]
153
+
154
+ BUILDER_CONFIG_CLASS = MetashiftConfig
155
+
156
+ def _info(self):
157
+
158
+ return datasets.DatasetInfo(
159
+ description=_DESCRIPTION,
160
+ features=datasets.Features(self._get_feature_types()),
161
+ supervised_keys=("image", "label"),
162
+ homepage=_HOMEPAGE,
163
+ license=_LICENSE,
164
+ citation=_CITATION,
165
+ task_templates=[ImageClassification(image_column="image", label_column="label")],
166
+ )
167
+
168
+ def _get_feature_types(self):
169
+ features = {
170
+ "image_id": datasets.Value("string"),
171
+ "image": datasets.Image(),
172
+ }
173
+
174
+ if self.config.attributes_dataset:
175
+ features.update({"label": datasets.ClassLabel(names=self.config.attributes)})
176
+ else:
177
+ features.update(
178
+ {
179
+ "label": datasets.ClassLabel(names=self.config.selected_classes),
180
+ "context": datasets.Value("string"),
181
+ }
182
+ )
183
+
184
+ if self.config.with_image_metadata:
185
+ features.update(
186
+ {
187
+ "width": datasets.Value("int64"),
188
+ "height": datasets.Value("int64"),
189
+ "location": datasets.Value("string"),
190
+ "weather": datasets.Value("string"),
191
+ "objects": datasets.Sequence(
192
+ {
193
+ "object_id": datasets.Value("string"),
194
+ "name": datasets.Value("string"),
195
+ "x": datasets.Value("int64"),
196
+ "y": datasets.Value("int64"),
197
+ "w": datasets.Value("int64"),
198
+ "h": datasets.Value("int64"),
199
+ "attributes": datasets.Sequence(datasets.Value("string")),
200
+ "relations": datasets.Sequence(
201
+ {
202
+ "name": datasets.Value("string"),
203
+ "object": datasets.Value("string"),
204
+ }
205
+ ),
206
+ }
207
+ ),
208
+ }
209
+ )
210
+
211
+ return features
212
+
213
+ @staticmethod
214
+ def _parse_node_str(node_str):
215
+ tag = node_str.split("(")[-1][:-1]
216
+ subject_str = node_str.split("(")[0].strip()
217
+ return subject_str, tag
218
+
219
+ @staticmethod
220
+ def _load_candidate_subsets(pkl_save_path):
221
+ with open(pkl_save_path, "rb") as pkl_f:
222
+ load_data = pickle.load(pkl_f)
223
+ return load_data
224
+
225
+ def _preprocess_groups(self, pkl_save_path, output_files_flag=False, subject_classes=_CLASSES):
226
+
227
+ IMAGE_SUBSET_SIZE_THRESHOLD = self.config.IMAGE_SUBSET_SIZE_THRESHOLD
228
+ trainsg_dupes = set()
229
+
230
+ ##################################
231
+ # Load cache data
232
+ # Global data dict
233
+ # Consult back to this dict for concrete image IDs.
234
+ ##################################
235
+ node_name_to_img_id = self._load_candidate_subsets(pkl_save_path)
236
+
237
+ ##################################
238
+ # Build a default counter first
239
+ # Data Iteration
240
+ ##################################
241
+ group_name_counter = Counter()
242
+ for node_name in node_name_to_img_id.keys():
243
+ ##################################
244
+ # Apply the image subset size threshold (IMAGE_SUBSET_SIZE_THRESHOLD)
245
+ ##################################
246
+ imageID_set = node_name_to_img_id[node_name]
247
+ imageID_set = imageID_set - trainsg_dupes
248
+ node_name_to_img_id[node_name] = imageID_set
249
+ if len(imageID_set) >= IMAGE_SUBSET_SIZE_THRESHOLD:
250
+ group_name_counter[node_name] = len(imageID_set)
251
+ else:
252
+ pass
253
+
254
+ # Keep (node_name, count) pairs for all groups that passed the threshold.
255
+
256
+ most_common_list = [(x, count) for x, count in group_name_counter.items()]
257
+
258
+ ##################################
259
+ # Build a subject dict
260
+ ##################################
261
+
262
+ subject_group_summary_dict = defaultdict(Counter)
263
+ for node_name, imageID_set_len in most_common_list:
264
+ subject_str, tag = self._parse_node_str(node_name)
265
+ ##################################
266
+ # Keep only the selected subject classes
267
+ ##################################
268
+ if subject_str not in subject_classes:
269
+ continue
270
+
271
+ subject_group_summary_dict[subject_str][node_name] = imageID_set_len
272
+
273
+ ##################################
274
+ # Get the subject dict stats
275
+ ##################################
276
+ subject_group_summary_list = sorted(
277
+ subject_group_summary_dict.items(), key=lambda x: sum(x[1].values()), reverse=True
278
+ )
279
+
280
+ new_subject_group_summary_list = list()
281
+ subjects_to_all_set = defaultdict(set)
282
+
283
+ ##################################
284
+ # Subject filtering for dataset generation
285
+ ##################################
286
+ for subject_str, subject_data in subject_group_summary_list:
287
+
288
+ ##################################
289
+ # Discard an object class if it has too few local groups
290
+ ##################################
291
+ if len(subject_data) <= self.config.MIN_LOCAL_GROUPS:
292
+ # if len(subject_data) <= 10:
293
+ continue
294
+ else:
295
+ new_subject_group_summary_list.append((subject_str, subject_data))
296
+
297
+ ##################################
298
+ # Iterate all the subsets of the given subject
299
+ ##################################
300
+ for node_name in subject_data:
301
+ subjects_to_all_set[node_name].update(node_name_to_img_id[node_name])
302
+
303
+ return subjects_to_all_set
304
+
305
+ @staticmethod
306
+ def _load_scene_graph(json_path):
307
+ with open(json_path, "r", encoding="utf-8") as f:
308
+ scene_graph = json.load(f)
309
+ return scene_graph
310
+
311
+ def _split_generators(self, dl_manager):
312
+ data_path = dl_manager.download_and_extract(_URLS)
313
+ metadata_path = None
314
+ subjects_to_all_set = None
315
+ attributes_path = None
316
+ if not self.config.attributes_dataset:
317
+ metadata_path = dl_manager.download_and_extract(_METADATA_URLS)
318
+ subjects_to_all_set = self._preprocess_groups(
319
+ metadata_path["full_candidate_subsets"], subject_classes=self.config.selected_classes
320
+ )
321
+ else:
322
+ attributes_path = dl_manager.download_and_extract(_ATTRIBUTES_URLS)
323
+
324
+ return [
325
+ datasets.SplitGenerator(
326
+ name=datasets.Split.TRAIN,
327
+ gen_kwargs={
328
+ "images_path": os.path.join(data_path["image_files"], "images"),
329
+ "subjects_to_all_set": subjects_to_all_set,
330
+ "attributes_path": attributes_path,
331
+ "image_metadata_path": data_path["scene_graph_annotations"],
332
+ },
333
+ ),
334
+ ]
335
+
336
+ @staticmethod
337
+ def _get_processed_image_metadata(image_id, scene_graph):
338
+ image_metadata = scene_graph[image_id]
339
+ objects = image_metadata["objects"]
340
+ if isinstance(objects, list):
341
+ return image_metadata
342
+ processed_objects = []
343
+ for object_id, object_details in objects.items():
344
+ object_details["object_id"] = object_id
345
+ processed_objects.append(object_details)
346
+ image_metadata["objects"] = processed_objects
347
+ if "location" not in image_metadata:
348
+ image_metadata["location"] = None
349
+ if "weather" not in image_metadata:
350
+ image_metadata["weather"] = None
351
+
352
+ return image_metadata
353
+
354
+ def _generate_examples(self, images_path, subjects_to_all_set, attributes_path, image_metadata_path):
355
+ idx = 0
356
+ if self.config.with_image_metadata:
357
+ train_scene_graph = os.path.join(image_metadata_path, "train_sceneGraphs.json")
358
+ test_scene_graph = os.path.join(image_metadata_path, "val_sceneGraphs.json")
359
+
360
+ scene_graph = self._load_scene_graph(train_scene_graph)
361
+ scene_graph.update(self._load_scene_graph(test_scene_graph))
362
+
363
+ if not self.config.attributes_dataset:
364
+ for subset in subjects_to_all_set:
365
+ class_name, context = self._parse_node_str(subset)
366
+ for image_id in subjects_to_all_set[subset]:
367
+ image_filename = image_id + ".jpg"
368
+ src_image_path = os.path.join(images_path, image_filename)
369
+ features = {
370
+ "image_id": image_id,
371
+ "image": src_image_path,
372
+ "label": class_name,
373
+ "context": context,
374
+ }
375
+ if self.config.with_image_metadata:
376
+ image_metadata = self._get_processed_image_metadata(image_id, scene_graph)
377
+ features.update(image_metadata)
378
+ yield idx, features
379
+ idx += 1
380
+ else:
381
+ attributes_candidate_subsets = self._load_candidate_subsets(
382
+ attributes_path["attributes_candidate_subsets"]
383
+ )
384
+ for attribute in self.config.attributes:
385
+ image_IDs = attributes_candidate_subsets[attribute]
386
+ for image_id in image_IDs:
387
+ image_filename = image_id + ".jpg"
388
+ src_image_path = os.path.join(images_path, image_filename)
389
+ features = {
390
+ "image_id": image_id,
391
+ "image": src_image_path,
392
+ "label": attribute,
393
+ }
394
+ if self.config.with_image_metadata:
395
+ image_metadata = self._get_processed_image_metadata(image_id, scene_graph)
396
+ features.update(image_metadata)
397
+ yield idx, features
398
+ idx += 1