dali-does committed
Commit fb07964
Parent: f04fb32

Updated dataset card

Files changed (1): README.md (+44, -46)
README.md CHANGED
@@ -36,13 +36,7 @@ task_ids:
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
- - [Curation Rationale](#curation-rationale)
- - [Source Data](#source-data)
- - [Annotations](#annotations)
- - [Personal and Sensitive Information](#personal-and-sensitive-information)
  - [Considerations for Using the Data](#considerations-for-using-the-data)
- - [Social Impact of Dataset](#social-impact-of-dataset)
- - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
  - [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
@@ -60,11 +54,48 @@ task_ids:

  ### Dataset Summary

- [More Information Needed]
+ Dataset for compositional multimodal mathematical reasoning based on CLEVR.
+
+ #### Loading the data, preprocessing text with CLIP
+
+ ```
+ from transformers import CLIPProcessor
+ from datasets import load_dataset, DownloadConfig
+
+ dl_config = DownloadConfig(resume_download=True,
+                            num_proc=8,
+                            force_download=True)
+
+ # Load 'general' instance of dataset
+ dataset = load_dataset('dali-does/clevr-math', download_config=dl_config)
+
+ # Load version with only multihop in test data
+ dataset_multihop = load_dataset('dali-does/clevr-math', 'multihop',
+                                 download_config=dl_config)
+
+ model_path = "openai/clip-vit-base-patch32"
+ extractor = CLIPProcessor.from_pretrained(model_path)
+
+ def transform_tokenize(e):
+     e['image'] = [image.convert('RGB') for image in e['image']]
+     # Pad all questions to CLIP's maximum text length so examples align across batches
+     return extractor(text=e['question'],
+                      images=e['image'],
+                      padding='max_length')
+
+ dataset = dataset.map(transform_tokenize,
+                       batched=True,
+                       num_proc=8)
+
+ dataset_subtraction = dataset.filter(lambda e:
+     e['template'].startswith('subtraction'), num_proc=4)
+ ```
+

  ### Supported Tasks and Leaderboards

- [More Information Needed]
+ A leaderboard will be announced at a later date.

  ### Languages

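The snippet above leaves the processor outputs (`input_ids`, `attention_mask`, `pixel_values`) as extra columns on the mapped dataset. As a minimal, illustrative follow-up that is not taken from the card itself, the sketch below shows one way those columns could be run through the CLIP checkpoint named above; it assumes PyTorch is installed and that `dataset` has been loaded and mapped exactly as in the snippet.

```
import torch
from transformers import CLIPModel

# Illustrative sketch: score image-text similarity for a few processed examples.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")

# Expose the processor outputs as PyTorch tensors (other columns are left as-is).
dataset.set_format(type='torch',
                   columns=['input_ids', 'attention_mask', 'pixel_values'])

batch = dataset['train'][:4]  # assumes a 'train' split, per the Data Splits section
with torch.no_grad():
    outputs = model(input_ids=batch['input_ids'],
                    attention_mask=batch['attention_mask'],
                    pixel_values=batch['pixel_values'])

print(outputs.logits_per_image.shape)  # (4, 4) image-text similarity logits
```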
@@ -74,11 +105,12 @@ The dataset is currently only available in English. To extend the dataset to oth

  ### Data Instances

- [More Information Needed]
+ * `general` containing the default version with multihop questions in train and test
+ * `multihop` containing multihop questions only in test data to test generalisation of reasoning
+

  ### Data Fields

- [More Information Needed]

  ```
  features = datasets.Features(
@@ -94,48 +126,15 @@ features = datasets.Features(

  ### Data Splits

- [More Information Needed]
+ train/val/test

  ## Dataset Creation

- ### Curation Rationale
-
- [More Information Needed]
-
- ### Source Data
+ Data is generated using the code provided with the CLEVR dataset, using Blender and templates constructed by the dataset curators.

- #### Initial Data Collection and Normalization
-
- [More Information Needed]
-
- #### Who are the source language producers?
-
- [More Information Needed]
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed]
-
- #### Who are the annotators?
-
- [More Information Needed]
-
- ### Personal and Sensitive Information
-
- [More Information Needed]

  ## Considerations for Using the Data

- ### Social Impact of Dataset
-
- [More Information Needed]
-
- ### Discussion of Biases
-
- [More Information Needed]
-
  ### Other Known Limitations

  [More Information Needed]
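To make the Data Instances and Data Splits entries above more concrete, the short sketch below (illustrative only, not taken from the card) loads the default `general` configuration and prints one example. The field names `question`, `template`, and `image` are the ones already used in the card's preprocessing snippet, and the `train` split name follows the train/val/test listing.

```
from datasets import load_dataset

# Illustrative sketch: peek at the splits and at a single instance.
dataset = load_dataset('dali-does/clevr-math')  # default 'general' configuration
print(dataset)  # shows the available splits and their row counts

example = dataset['train'][0]
print(example['question'])    # the natural-language question
print(example['template'])    # question template (the card's filter example matches templates starting with 'subtraction')
print(example['image'].size)  # the rendered CLEVR scene, decoded as a PIL image
```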
@@ -144,7 +143,6 @@ features = datasets.Features(

  ### Dataset Curators

- [More Information Needed]
  Adam Dahlgren Lindström - dali@cs.umu.se

  ### Licensing Information