Commit b1ea4e2 (0 parents)
system (HF staff) committed

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,251 @@
+ ---
+ annotations_creators:
+   convai2_inferred:
+   - machine-generated
+   funpedia:
+   - found
+   gendered_words:
+   - found
+   image_chat:
+   - found
+   light_inferred:
+   - machine-generated
+   name_genders:
+   - found
+   new_data:
+   - crowdsourced
+   - found
+   opensubtitles_inferred:
+   - machine-generated
+   wizard:
+   - found
+   yelp_inferred:
+   - machine-generated
+ language_creators:
+   convai2_inferred:
+   - found
+   funpedia:
+   - found
+   gendered_words:
+   - found
+   image_chat:
+   - found
+   light_inferred:
+   - found
+   name_genders:
+   - found
+   new_data:
+   - crowdsourced
+   - found
+   opensubtitles_inferred:
+   - found
+   wizard:
+   - found
+   yelp_inferred:
+   - found
+ languages:
+ - en
+ licenses:
+ - mit
+ multilinguality:
+ - monolingual
+ size_categories:
+   convai2_inferred:
+   - 100K<n<1M
+   funpedia:
+   - 10K<n<100K
+   gendered_words:
+   - n<1K
+   image_chat:
+   - 100K<n<1M
+   light_inferred:
+   - 100K<n<1M
+   name_genders:
+   - n>1M
+   new_data:
+   - 1K<n<10K
+   opensubtitles_inferred:
+   - 100K<n<1M
+   wizard:
+   - 10K<n<100K
+   yelp_inferred:
+   - n>1M
+ source_datasets:
+   convai2_inferred:
+   - extended|other-convai2
+   - original
+   funpedia:
+   - original
+   gendered_words:
+   - original
+   image_chat:
+   - original
+   light_inferred:
+   - extended|other-light
+   - original
+   name_genders:
+   - original
+   new_data:
+   - original
+   opensubtitles_inferred:
+   - extended|other-opensubtitles
+   - original
+   wizard:
+   - original
+   yelp_inferred:
+   - extended|other-yelp
+   - original
+ task_categories:
+ - text-classification
+ task_ids:
+ - text-classification-other-gender-bias
+ ---
+ 
+ # Dataset Card for Multi-Dimensional Gender Bias Classification
+ 
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+ 
+ ## Dataset Description
+ 
+ - **Homepage:** https://parl.ai/projects/md_gender/
+ - **Repository:** [Needs More Information]
+ - **Paper:** https://arxiv.org/abs/2005.00614
+ - **Leaderboard:** [Needs More Information]
+ - **Point of Contact:** edinan@fb.com
+ 
+ ### Dataset Summary
+ 
+ Machine learning models are trained to find patterns in data.
+ NLP models can inadvertently learn socially undesirable patterns when training on gender-biased text.
+ In this work, we propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions:
+ bias from the gender of the person being spoken about, bias from the gender of the person being spoken to, and bias from the gender of the speaker.
+ Using this fine-grained framework, we automatically annotate eight large-scale datasets with gender information.
+ In addition, we collect a novel, crowdsourced evaluation benchmark of utterance-level gender rewrites.
+ Distinguishing between gender bias along multiple dimensions is important, as it enables us to train finer-grained gender bias classifiers.
+ We show that our classifiers prove valuable for a variety of important applications, such as controlling for gender bias in generative models,
+ detecting gender bias in arbitrary text, and shedding light on offensive language in terms of genderedness.
+ 
+ ### Supported Tasks and Leaderboards
+ 
+ [Needs More Information]
+ 
+ ### Languages
+ 
+ The data is in English (`en`).
+ 
+ ## Dataset Structure
+ 
+ ### Data Instances
+ 
+ [Needs More Information]
+ 
+ ### Data Fields
+ 
+ The data has the following features; a short loading sketch follows the field lists.
+ 
+ For the `new_data` config:
+ - `text`: the text to be classified.
+ - `original`: the text before reformulation.
+ - `labels`: a `list` of classification labels, with possible values including `ABOUT:female`, `ABOUT:male`, `PARTNER:female`, `PARTNER:male`, `SELF:female`, `SELF:male`.
+ - `class_type`: a classification label, with possible values including `about`, `partner`, `self`.
+ - `turker_gender`: a classification label, with possible values including `man`, `woman`, `nonbinary`, `prefer not to say`, `no answer`.
+ 
+ For the other annotated datasets:
+ - `text`: the text to be classified.
+ - `gender`: a classification label, with possible values including `gender-neutral`, `female`, `male`.
+ 
+ For the `_inferred` configurations (`yelp_inferred` has only the binary fields):
+ - `text`: the text to be classified.
+ - `binary_label`: a classification label, with possible values including `ABOUT:female`, `ABOUT:male`.
+ - `binary_score`: a score between 0 and 1.
+ - `ternary_label`: a classification label, with possible values including `ABOUT:female`, `ABOUT:male`, `ABOUT:gender-neutral`.
+ - `ternary_score`: a score between 0 and 1.
+ 
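+ As a quick check of these fields, here is a minimal loading sketch using the `datasets` library (it assumes the dataset is loaded under the name `md_gender_bias`, as in this repository; `ClassLabel` fields come back as integer indices):
+ 
+ ```python
+ from datasets import load_dataset
+ 
+ # Load the crowdsourced evaluation benchmark (the `new_data` configuration).
+ new_data = load_dataset("md_gender_bias", "new_data", split="train")
+ 
+ example = new_data[0]
+ print(example["text"])      # the reformulated text to classify
+ print(example["original"])  # the text before reformulation
+ print(example["labels"])    # a list of label indices into the six classes above
+ 
+ # ClassLabel features map indices back to the label strings listed above.
+ class_names = new_data.features["class_type"].names
+ print(class_names[example["class_type"]])  # "about", "partner" or "self"
+ ```
+ 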
+ ### Data Splits
+ 
+ The different parts of the data can be accessed through the different configurations (a loading sketch follows the list):
+ - `gendered_words`: A list of common nouns with a masculine and a feminine variant.
+ - `new_data`: Sentences reformulated and annotated along all three axes.
+ - `funpedia`, `wizard`: Sentences from Funpedia and Wizard of Wikipedia annotated with ABOUT gender, using entity gender information.
+ - `image_chat`: Sentences about images annotated with ABOUT gender, based on gender information about the entities in the image.
+ - `convai2_inferred`, `light_inferred`, `opensubtitles_inferred`, `yelp_inferred`: Data from several source datasets with ABOUT annotations inferred by a trained classifier.
+ 
+ 
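+ A minimal sketch of selecting configurations and splits (split names follow this commit's `dataset_infos.json`; the available splits vary by configuration):
+ 
+ ```python
+ from datasets import load_dataset
+ 
+ # The word list and the crowdsourced `new_data` only expose a "train" split.
+ words = load_dataset("md_gender_bias", "gendered_words", split="train")
+ 
+ # The annotated source datasets (e.g. `funpedia`, `wizard`) come with
+ # train / validation / test splits.
+ funpedia = load_dataset("md_gender_bias", "funpedia")
+ print(funpedia)  # DatasetDict with "train", "validation" and "test"
+ 
+ # The *_inferred configurations carry classifier-predicted ABOUT labels.
+ convai2 = load_dataset("md_gender_bias", "convai2_inferred", split="test")
+ print(convai2.features["ternary_label"].names)
+ ```
+ 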
+ ## Dataset Creation
+ 
+ ### Curation Rationale
+ 
+ [Needs More Information]
+ 
+ ### Source Data
+ 
+ #### Initial Data Collection and Normalization
+ 
+ [Needs More Information]
+ 
+ #### Who are the source language producers?
+ 
+ [Needs More Information]
+ 
+ ### Annotations
+ 
+ #### Annotation process
+ 
+ [Needs More Information]
+ 
+ #### Who are the annotators?
+ 
+ [Needs More Information]
+ 
+ ### Personal and Sensitive Information
+ 
+ [Needs More Information]
+ 
+ ## Considerations for Using the Data
+ 
+ ### Social Impact of Dataset
+ 
+ [Needs More Information]
+ 
+ ### Discussion of Biases
+ 
+ [Needs More Information]
+ 
+ ### Other Known Limitations
+ 
+ [Needs More Information]
+ 
+ ## Additional Information
+ 
+ ### Dataset Curators
+ 
+ [Needs More Information]
+ 
+ ### Licensing Information
+ 
+ The dataset is released under the MIT License.
+ 
+ ### Citation Information
+ 
+ [Needs More Information]
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"gendered_words": {"description": "Machine learning models are trained to find patterns in data.\nNLP models can inadvertently learn socially undesirable patterns when training on gender biased text.\nIn this work, we propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions:\nbias from the gender of the person being spoken about, bias from the gender of the person being spoken to, and bias from the gender of the speaker.\nUsing this fine-grained framework, we automatically annotate eight large scale datasets with gender information.\nIn addition, we collect a novel, crowdsourced evaluation benchmark of utterance-level gender rewrites.\nDistinguishing between gender bias along multiple dimensions is important, as it enables us to train finer-grained gender bias classifiers.\nWe show our classifiers prove valuable for a variety of important applications, such as controlling for gender bias in generative models,\ndetecting gender bias in arbitrary text, and shed light on offensive language in terms of genderedness.\n", "citation": "@inproceedings{md_gender_bias,\n author = {Emily Dinan and\n Angela Fan and\n Ledell Wu and\n Jason Weston and\n Douwe Kiela and\n Adina Williams},\n editor = {Bonnie Webber and\n Trevor Cohn and\n Yulan He and\n Yang Liu},\n title = {Multi-Dimensional Gender Bias Classification},\n booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural\n Language Processing, {EMNLP} 2020, Online, November 16-20, 2020},\n pages = {314--331},\n publisher = {Association for Computational Linguistics},\n year = {2020},\n url = {https://www.aclweb.org/anthology/2020.emnlp-main.23/}\n}\n", "homepage": "https://parl.ai/projects/md_gender/", "license": "MIT License", "features": {"word_masculine": {"dtype": "string", "id": null, "_type": "Value"}, "word_feminine": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "md_gender_bias", "config_name": "gendered_words", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 4988, "num_examples": 222, "dataset_name": "md_gender_bias"}}, "download_checksums": {"http://parl.ai/downloads/md_gender/gend_multiclass_10072020.tgz": {"num_bytes": 232629010, "checksum": "c2c03257c53497b9e453600201fc7245b55dec1d98965093b4657fdb54822e9d"}}, "download_size": 232629010, "post_processing_size": null, "dataset_size": 4988, "size_in_bytes": 232633998}, "name_genders": {"description": "Machine learning models are trained to find patterns in data.\nNLP models can inadvertently learn socially undesirable patterns when training on gender biased text.\nIn this work, we propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions:\nbias from the gender of the person being spoken about, bias from the gender of the person being spoken to, and bias from the gender of the speaker.\nUsing this fine-grained framework, we automatically annotate eight large scale datasets with gender information.\nIn addition, we collect a novel, crowdsourced evaluation benchmark of utterance-level gender rewrites.\nDistinguishing between gender bias along multiple dimensions is important, as it enables us to train finer-grained gender bias classifiers.\nWe show our classifiers prove valuable for a variety of important applications, such as controlling for gender bias in generative models,\ndetecting gender 
bias in arbitrary text, and shed light on offensive language in terms of genderedness.\n", "citation": "@inproceedings{md_gender_bias,\n author = {Emily Dinan and\n Angela Fan and\n Ledell Wu and\n Jason Weston and\n Douwe Kiela and\n Adina Williams},\n editor = {Bonnie Webber and\n Trevor Cohn and\n Yulan He and\n Yang Liu},\n title = {Multi-Dimensional Gender Bias Classification},\n booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural\n Language Processing, {EMNLP} 2020, Online, November 16-20, 2020},\n pages = {314--331},\n publisher = {Association for Computational Linguistics},\n year = {2020},\n url = {https://www.aclweb.org/anthology/2020.emnlp-main.23/}\n}\n", "homepage": "https://parl.ai/projects/md_gender/", "license": "MIT License", "features": {"name": {"dtype": "string", "id": null, "_type": "Value"}, "assigned_gender": {"num_classes": 2, "names": ["M", "F"], "names_file": null, "id": null, "_type": "ClassLabel"}, "count": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "md_gender_bias", "config_name": "name_genders", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"yob1880": {"name": "yob1880", "num_bytes": 43404, "num_examples": 2000, "dataset_name": "md_gender_bias"}, "yob1881": {"name": "yob1881", "num_bytes": 41944, "num_examples": 1935, "dataset_name": "md_gender_bias"}, "yob1882": {"name": "yob1882", "num_bytes": 46211, "num_examples": 2127, "dataset_name": "md_gender_bias"}, "yob1883": {"name": "yob1883", "num_bytes": 45221, "num_examples": 2084, "dataset_name": "md_gender_bias"}, "yob1884": {"name": "yob1884", "num_bytes": 49886, "num_examples": 2297, "dataset_name": "md_gender_bias"}, "yob1885": {"name": "yob1885", "num_bytes": 49810, "num_examples": 2294, "dataset_name": "md_gender_bias"}, "yob1886": {"name": "yob1886", "num_bytes": 51935, "num_examples": 2392, "dataset_name": "md_gender_bias"}, "yob1887": {"name": "yob1887", "num_bytes": 51458, "num_examples": 2373, "dataset_name": "md_gender_bias"}, "yob1888": {"name": "yob1888", "num_bytes": 57531, "num_examples": 2651, "dataset_name": "md_gender_bias"}, "yob1889": {"name": "yob1889", "num_bytes": 56177, "num_examples": 2590, "dataset_name": "md_gender_bias"}, "yob1890": {"name": "yob1890", "num_bytes": 58509, "num_examples": 2695, "dataset_name": "md_gender_bias"}, "yob1891": {"name": "yob1891", "num_bytes": 57767, "num_examples": 2660, "dataset_name": "md_gender_bias"}, "yob1892": {"name": "yob1892", "num_bytes": 63493, "num_examples": 2921, "dataset_name": "md_gender_bias"}, "yob1893": {"name": "yob1893", "num_bytes": 61525, "num_examples": 2831, "dataset_name": "md_gender_bias"}, "yob1894": {"name": "yob1894", "num_bytes": 63927, "num_examples": 2941, "dataset_name": "md_gender_bias"}, "yob1895": {"name": "yob1895", "num_bytes": 66346, "num_examples": 3049, "dataset_name": "md_gender_bias"}, "yob1896": {"name": "yob1896", "num_bytes": 67224, "num_examples": 3091, "dataset_name": "md_gender_bias"}, "yob1897": {"name": "yob1897", "num_bytes": 65886, "num_examples": 3028, "dataset_name": "md_gender_bias"}, "yob1898": {"name": "yob1898", "num_bytes": 71088, "num_examples": 3264, "dataset_name": "md_gender_bias"}, "yob1899": {"name": "yob1899", "num_bytes": 66225, "num_examples": 3042, "dataset_name": "md_gender_bias"}, "yob1900": {"name": "yob1900", "num_bytes": 81305, "num_examples": 3730, "dataset_name": "md_gender_bias"}, "yob1901": {"name": "yob1901", 
"num_bytes": 68723, "num_examples": 3153, "dataset_name": "md_gender_bias"}, "yob1902": {"name": "yob1902", "num_bytes": 73321, "num_examples": 3362, "dataset_name": "md_gender_bias"}, "yob1903": {"name": "yob1903", "num_bytes": 74019, "num_examples": 3389, "dataset_name": "md_gender_bias"}, "yob1904": {"name": "yob1904", "num_bytes": 77751, "num_examples": 3560, "dataset_name": "md_gender_bias"}, "yob1905": {"name": "yob1905", "num_bytes": 79802, "num_examples": 3655, "dataset_name": "md_gender_bias"}, "yob1906": {"name": "yob1906", "num_bytes": 79392, "num_examples": 3633, "dataset_name": "md_gender_bias"}, "yob1907": {"name": "yob1907", "num_bytes": 86342, "num_examples": 3948, "dataset_name": "md_gender_bias"}, "yob1908": {"name": "yob1908", "num_bytes": 87965, "num_examples": 4018, "dataset_name": "md_gender_bias"}, "yob1909": {"name": "yob1909", "num_bytes": 92591, "num_examples": 4227, "dataset_name": "md_gender_bias"}, "yob1910": {"name": "yob1910", "num_bytes": 101491, "num_examples": 4629, "dataset_name": "md_gender_bias"}, "yob1911": {"name": "yob1911", "num_bytes": 106787, "num_examples": 4867, "dataset_name": "md_gender_bias"}, "yob1912": {"name": "yob1912", "num_bytes": 139448, "num_examples": 6351, "dataset_name": "md_gender_bias"}, "yob1913": {"name": "yob1913", "num_bytes": 153110, "num_examples": 6968, "dataset_name": "md_gender_bias"}, "yob1914": {"name": "yob1914", "num_bytes": 175167, "num_examples": 7965, "dataset_name": "md_gender_bias"}, "yob1915": {"name": "yob1915", "num_bytes": 205921, "num_examples": 9357, "dataset_name": "md_gender_bias"}, "yob1916": {"name": "yob1916", "num_bytes": 213468, "num_examples": 9696, "dataset_name": "md_gender_bias"}, "yob1917": {"name": "yob1917", "num_bytes": 218446, "num_examples": 9913, "dataset_name": "md_gender_bias"}, "yob1918": {"name": "yob1918", "num_bytes": 229209, "num_examples": 10398, "dataset_name": "md_gender_bias"}, "yob1919": {"name": "yob1919", "num_bytes": 228656, "num_examples": 10369, "dataset_name": "md_gender_bias"}, "yob1920": {"name": "yob1920", "num_bytes": 237286, "num_examples": 10756, "dataset_name": "md_gender_bias"}, "yob1921": {"name": "yob1921", "num_bytes": 239616, "num_examples": 10857, "dataset_name": "md_gender_bias"}, "yob1922": {"name": "yob1922", "num_bytes": 237569, "num_examples": 10756, "dataset_name": "md_gender_bias"}, "yob1923": {"name": "yob1923", "num_bytes": 235046, "num_examples": 10643, "dataset_name": "md_gender_bias"}, "yob1924": {"name": "yob1924", "num_bytes": 240113, "num_examples": 10869, "dataset_name": "md_gender_bias"}, "yob1925": {"name": "yob1925", "num_bytes": 235098, "num_examples": 10638, "dataset_name": "md_gender_bias"}, "yob1926": {"name": "yob1926", "num_bytes": 230970, "num_examples": 10458, "dataset_name": "md_gender_bias"}, "yob1927": {"name": "yob1927", "num_bytes": 230004, "num_examples": 10406, "dataset_name": "md_gender_bias"}, "yob1928": {"name": "yob1928", "num_bytes": 224583, "num_examples": 10159, "dataset_name": "md_gender_bias"}, "yob1929": {"name": "yob1929", "num_bytes": 217057, "num_examples": 9820, "dataset_name": "md_gender_bias"}, "yob1930": {"name": "yob1930", "num_bytes": 216352, "num_examples": 9791, "dataset_name": "md_gender_bias"}, "yob1931": {"name": "yob1931", "num_bytes": 205361, "num_examples": 9298, "dataset_name": "md_gender_bias"}, "yob1932": {"name": "yob1932", "num_bytes": 207268, "num_examples": 9381, "dataset_name": "md_gender_bias"}, "yob1933": {"name": "yob1933", "num_bytes": 199031, "num_examples": 9013, "dataset_name": 
"md_gender_bias"}, "yob1934": {"name": "yob1934", "num_bytes": 202758, "num_examples": 9180, "dataset_name": "md_gender_bias"}, "yob1935": {"name": "yob1935", "num_bytes": 199614, "num_examples": 9037, "dataset_name": "md_gender_bias"}, "yob1936": {"name": "yob1936", "num_bytes": 196379, "num_examples": 8894, "dataset_name": "md_gender_bias"}, "yob1937": {"name": "yob1937", "num_bytes": 197757, "num_examples": 8946, "dataset_name": "md_gender_bias"}, "yob1938": {"name": "yob1938", "num_bytes": 199603, "num_examples": 9032, "dataset_name": "md_gender_bias"}, "yob1939": {"name": "yob1939", "num_bytes": 196979, "num_examples": 8918, "dataset_name": "md_gender_bias"}, "yob1940": {"name": "yob1940", "num_bytes": 198141, "num_examples": 8961, "dataset_name": "md_gender_bias"}, "yob1941": {"name": "yob1941", "num_bytes": 200858, "num_examples": 9085, "dataset_name": "md_gender_bias"}, "yob1942": {"name": "yob1942", "num_bytes": 208363, "num_examples": 9425, "dataset_name": "md_gender_bias"}, "yob1943": {"name": "yob1943", "num_bytes": 207940, "num_examples": 9408, "dataset_name": "md_gender_bias"}, "yob1944": {"name": "yob1944", "num_bytes": 202227, "num_examples": 9152, "dataset_name": "md_gender_bias"}, "yob1945": {"name": "yob1945", "num_bytes": 199478, "num_examples": 9025, "dataset_name": "md_gender_bias"}, "yob1946": {"name": "yob1946", "num_bytes": 214614, "num_examples": 9705, "dataset_name": "md_gender_bias"}, "yob1947": {"name": "yob1947", "num_bytes": 229327, "num_examples": 10371, "dataset_name": "md_gender_bias"}, "yob1948": {"name": "yob1948", "num_bytes": 226615, "num_examples": 10241, "dataset_name": "md_gender_bias"}, "yob1949": {"name": "yob1949", "num_bytes": 227278, "num_examples": 10269, "dataset_name": "md_gender_bias"}, "yob1950": {"name": "yob1950", "num_bytes": 227946, "num_examples": 10303, "dataset_name": "md_gender_bias"}, "yob1951": {"name": "yob1951", "num_bytes": 231613, "num_examples": 10462, "dataset_name": "md_gender_bias"}, "yob1952": {"name": "yob1952", "num_bytes": 235483, "num_examples": 10646, "dataset_name": "md_gender_bias"}, "yob1953": {"name": "yob1953", "num_bytes": 239654, "num_examples": 10837, "dataset_name": "md_gender_bias"}, "yob1954": {"name": "yob1954", "num_bytes": 242389, "num_examples": 10968, "dataset_name": "md_gender_bias"}, "yob1955": {"name": "yob1955", "num_bytes": 245652, "num_examples": 11115, "dataset_name": "md_gender_bias"}, "yob1956": {"name": "yob1956", "num_bytes": 250674, "num_examples": 11340, "dataset_name": "md_gender_bias"}, "yob1957": {"name": "yob1957", "num_bytes": 255370, "num_examples": 11564, "dataset_name": "md_gender_bias"}, "yob1958": {"name": "yob1958", "num_bytes": 254520, "num_examples": 11522, "dataset_name": "md_gender_bias"}, "yob1959": {"name": "yob1959", "num_bytes": 260051, "num_examples": 11767, "dataset_name": "md_gender_bias"}, "yob1960": {"name": "yob1960", "num_bytes": 263474, "num_examples": 11921, "dataset_name": "md_gender_bias"}, "yob1961": {"name": "yob1961", "num_bytes": 269493, "num_examples": 12182, "dataset_name": "md_gender_bias"}, "yob1962": {"name": "yob1962", "num_bytes": 270244, "num_examples": 12209, "dataset_name": "md_gender_bias"}, "yob1963": {"name": "yob1963", "num_bytes": 271872, "num_examples": 12282, "dataset_name": "md_gender_bias"}, "yob1964": {"name": "yob1964", "num_bytes": 274590, "num_examples": 12397, "dataset_name": "md_gender_bias"}, "yob1965": {"name": "yob1965", "num_bytes": 264889, "num_examples": 11952, "dataset_name": "md_gender_bias"}, "yob1966": {"name": 
"yob1966", "num_bytes": 269321, "num_examples": 12151, "dataset_name": "md_gender_bias"}, "yob1967": {"name": "yob1967", "num_bytes": 274867, "num_examples": 12397, "dataset_name": "md_gender_bias"}, "yob1968": {"name": "yob1968", "num_bytes": 286774, "num_examples": 12936, "dataset_name": "md_gender_bias"}, "yob1969": {"name": "yob1969", "num_bytes": 304909, "num_examples": 13749, "dataset_name": "md_gender_bias"}, "yob1970": {"name": "yob1970", "num_bytes": 328047, "num_examples": 14779, "dataset_name": "md_gender_bias"}, "yob1971": {"name": "yob1971", "num_bytes": 339657, "num_examples": 15295, "dataset_name": "md_gender_bias"}, "yob1972": {"name": "yob1972", "num_bytes": 342321, "num_examples": 15412, "dataset_name": "md_gender_bias"}, "yob1973": {"name": "yob1973", "num_bytes": 348414, "num_examples": 15682, "dataset_name": "md_gender_bias"}, "yob1974": {"name": "yob1974", "num_bytes": 361188, "num_examples": 16249, "dataset_name": "md_gender_bias"}, "yob1975": {"name": "yob1975", "num_bytes": 376491, "num_examples": 16944, "dataset_name": "md_gender_bias"}, "yob1976": {"name": "yob1976", "num_bytes": 386565, "num_examples": 17391, "dataset_name": "md_gender_bias"}, "yob1977": {"name": "yob1977", "num_bytes": 403994, "num_examples": 18175, "dataset_name": "md_gender_bias"}, "yob1978": {"name": "yob1978", "num_bytes": 405430, "num_examples": 18231, "dataset_name": "md_gender_bias"}, "yob1979": {"name": "yob1979", "num_bytes": 423423, "num_examples": 19039, "dataset_name": "md_gender_bias"}, "yob1980": {"name": "yob1980", "num_bytes": 432317, "num_examples": 19452, "dataset_name": "md_gender_bias"}, "yob1981": {"name": "yob1981", "num_bytes": 432980, "num_examples": 19475, "dataset_name": "md_gender_bias"}, "yob1982": {"name": "yob1982", "num_bytes": 437986, "num_examples": 19694, "dataset_name": "md_gender_bias"}, "yob1983": {"name": "yob1983", "num_bytes": 431531, "num_examples": 19407, "dataset_name": "md_gender_bias"}, "yob1984": {"name": "yob1984", "num_bytes": 434085, "num_examples": 19506, "dataset_name": "md_gender_bias"}, "yob1985": {"name": "yob1985", "num_bytes": 447113, "num_examples": 20085, "dataset_name": "md_gender_bias"}, "yob1986": {"name": "yob1986", "num_bytes": 460315, "num_examples": 20657, "dataset_name": "md_gender_bias"}, "yob1987": {"name": "yob1987", "num_bytes": 477677, "num_examples": 21406, "dataset_name": "md_gender_bias"}, "yob1988": {"name": "yob1988", "num_bytes": 499347, "num_examples": 22367, "dataset_name": "md_gender_bias"}, "yob1989": {"name": "yob1989", "num_bytes": 531020, "num_examples": 23775, "dataset_name": "md_gender_bias"}, "yob1990": {"name": "yob1990", "num_bytes": 552114, "num_examples": 24716, "dataset_name": "md_gender_bias"}, "yob1991": {"name": "yob1991", "num_bytes": 560932, "num_examples": 25109, "dataset_name": "md_gender_bias"}, "yob1992": {"name": "yob1992", "num_bytes": 568151, "num_examples": 25427, "dataset_name": "md_gender_bias"}, "yob1993": {"name": "yob1993", "num_bytes": 579778, "num_examples": 25966, "dataset_name": "md_gender_bias"}, "yob1994": {"name": "yob1994", "num_bytes": 580223, "num_examples": 25997, "dataset_name": "md_gender_bias"}, "yob1995": {"name": "yob1995", "num_bytes": 581949, "num_examples": 26080, "dataset_name": "md_gender_bias"}, "yob1996": {"name": "yob1996", "num_bytes": 589131, "num_examples": 26423, "dataset_name": "md_gender_bias"}, "yob1997": {"name": "yob1997", "num_bytes": 601284, "num_examples": 26970, "dataset_name": "md_gender_bias"}, "yob1998": {"name": "yob1998", "num_bytes": 621587, 
"num_examples": 27902, "dataset_name": "md_gender_bias"}, "yob1999": {"name": "yob1999", "num_bytes": 635355, "num_examples": 28552, "dataset_name": "md_gender_bias"}, "yob2000": {"name": "yob2000", "num_bytes": 662398, "num_examples": 29772, "dataset_name": "md_gender_bias"}, "yob2001": {"name": "yob2001", "num_bytes": 673111, "num_examples": 30274, "dataset_name": "md_gender_bias"}, "yob2002": {"name": "yob2002", "num_bytes": 679392, "num_examples": 30564, "dataset_name": "md_gender_bias"}, "yob2003": {"name": "yob2003", "num_bytes": 692931, "num_examples": 31185, "dataset_name": "md_gender_bias"}, "yob2004": {"name": "yob2004", "num_bytes": 711776, "num_examples": 32048, "dataset_name": "md_gender_bias"}, "yob2005": {"name": "yob2005", "num_bytes": 723065, "num_examples": 32549, "dataset_name": "md_gender_bias"}, "yob2006": {"name": "yob2006", "num_bytes": 757620, "num_examples": 34088, "dataset_name": "md_gender_bias"}, "yob2007": {"name": "yob2007", "num_bytes": 776893, "num_examples": 34961, "dataset_name": "md_gender_bias"}, "yob2008": {"name": "yob2008", "num_bytes": 779403, "num_examples": 35079, "dataset_name": "md_gender_bias"}, "yob2009": {"name": "yob2009", "num_bytes": 771032, "num_examples": 34709, "dataset_name": "md_gender_bias"}, "yob2010": {"name": "yob2010", "num_bytes": 756717, "num_examples": 34073, "dataset_name": "md_gender_bias"}, "yob2011": {"name": "yob2011", "num_bytes": 752804, "num_examples": 33908, "dataset_name": "md_gender_bias"}, "yob2012": {"name": "yob2012", "num_bytes": 748915, "num_examples": 33747, "dataset_name": "md_gender_bias"}, "yob2013": {"name": "yob2013", "num_bytes": 738288, "num_examples": 33282, "dataset_name": "md_gender_bias"}, "yob2014": {"name": "yob2014", "num_bytes": 737219, "num_examples": 33243, "dataset_name": "md_gender_bias"}, "yob2015": {"name": "yob2015", "num_bytes": 734183, "num_examples": 33121, "dataset_name": "md_gender_bias"}, "yob2016": {"name": "yob2016", "num_bytes": 731291, "num_examples": 33010, "dataset_name": "md_gender_bias"}, "yob2017": {"name": "yob2017", "num_bytes": 721444, "num_examples": 32590, "dataset_name": "md_gender_bias"}, "yob2018": {"name": "yob2018", "num_bytes": 708657, "num_examples": 32033, "dataset_name": "md_gender_bias"}}, "download_checksums": {"http://parl.ai/downloads/md_gender/gend_multiclass_10072020.tgz": {"num_bytes": 232629010, "checksum": "c2c03257c53497b9e453600201fc7245b55dec1d98965093b4657fdb54822e9d"}}, "download_size": 232629010, "post_processing_size": null, "dataset_size": 43393095, "size_in_bytes": 276022105}, "new_data": {"description": "Machine learning models are trained to find patterns in data.\nNLP models can inadvertently learn socially undesirable patterns when training on gender biased text.\nIn this work, we propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions:\nbias from the gender of the person being spoken about, bias from the gender of the person being spoken to, and bias from the gender of the speaker.\nUsing this fine-grained framework, we automatically annotate eight large scale datasets with gender information.\nIn addition, we collect a novel, crowdsourced evaluation benchmark of utterance-level gender rewrites.\nDistinguishing between gender bias along multiple dimensions is important, as it enables us to train finer-grained gender bias classifiers.\nWe show our classifiers prove valuable for a variety of important applications, such as controlling for gender bias in generative models,\ndetecting 
gender bias in arbitrary text, and shed light on offensive language in terms of genderedness.\n", "citation": "@inproceedings{md_gender_bias,\n author = {Emily Dinan and\n Angela Fan and\n Ledell Wu and\n Jason Weston and\n Douwe Kiela and\n Adina Williams},\n editor = {Bonnie Webber and\n Trevor Cohn and\n Yulan He and\n Yang Liu},\n title = {Multi-Dimensional Gender Bias Classification},\n booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural\n Language Processing, {EMNLP} 2020, Online, November 16-20, 2020},\n pages = {314--331},\n publisher = {Association for Computational Linguistics},\n year = {2020},\n url = {https://www.aclweb.org/anthology/2020.emnlp-main.23/}\n}\n", "homepage": "https://parl.ai/projects/md_gender/", "license": "MIT License", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "original": {"dtype": "string", "id": null, "_type": "Value"}, "labels": [{"num_classes": 6, "names": ["ABOUT:female", "ABOUT:male", "PARTNER:female", "PARTNER:male", "SELF:female", "SELF:male"], "names_file": null, "id": null, "_type": "ClassLabel"}], "class_type": {"num_classes": 3, "names": ["about", "partner", "self"], "names_file": null, "id": null, "_type": "ClassLabel"}, "turker_gender": {"num_classes": 5, "names": ["man", "woman", "nonbinary", "prefer not to say", "no answer"], "names_file": null, "id": null, "_type": "ClassLabel"}, "episode_done": {"dtype": "bool_", "id": null, "_type": "Value"}, "confidence": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "md_gender_bias", "config_name": "new_data", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 369753, "num_examples": 2345, "dataset_name": "md_gender_bias"}}, "download_checksums": {"http://parl.ai/downloads/md_gender/gend_multiclass_10072020.tgz": {"num_bytes": 232629010, "checksum": "c2c03257c53497b9e453600201fc7245b55dec1d98965093b4657fdb54822e9d"}}, "download_size": 232629010, "post_processing_size": null, "dataset_size": 369753, "size_in_bytes": 232998763}, "funpedia": {"description": "Machine learning models are trained to find patterns in data.\nNLP models can inadvertently learn socially undesirable patterns when training on gender biased text.\nIn this work, we propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions:\nbias from the gender of the person being spoken about, bias from the gender of the person being spoken to, and bias from the gender of the speaker.\nUsing this fine-grained framework, we automatically annotate eight large scale datasets with gender information.\nIn addition, we collect a novel, crowdsourced evaluation benchmark of utterance-level gender rewrites.\nDistinguishing between gender bias along multiple dimensions is important, as it enables us to train finer-grained gender bias classifiers.\nWe show our classifiers prove valuable for a variety of important applications, such as controlling for gender bias in generative models,\ndetecting gender bias in arbitrary text, and shed light on offensive language in terms of genderedness.\n", "citation": "@inproceedings{md_gender_bias,\n author = {Emily Dinan and\n Angela Fan and\n Ledell Wu and\n Jason Weston and\n Douwe Kiela and\n Adina Williams},\n editor = {Bonnie Webber and\n Trevor Cohn and\n Yulan He and\n Yang Liu},\n title = {Multi-Dimensional Gender Bias Classification},\n 
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural\n Language Processing, {EMNLP} 2020, Online, November 16-20, 2020},\n pages = {314--331},\n publisher = {Association for Computational Linguistics},\n year = {2020},\n url = {https://www.aclweb.org/anthology/2020.emnlp-main.23/}\n}\n", "homepage": "https://parl.ai/projects/md_gender/", "license": "MIT License", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "title": {"dtype": "string", "id": null, "_type": "Value"}, "persona": {"dtype": "string", "id": null, "_type": "Value"}, "gender": {"num_classes": 3, "names": ["gender-neutral", "female", "male"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "builder_name": "md_gender_bias", "config_name": "funpedia", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 3225542, "num_examples": 23897, "dataset_name": "md_gender_bias"}, "validation": {"name": "validation", "num_bytes": 402205, "num_examples": 2984, "dataset_name": "md_gender_bias"}, "test": {"name": "test", "num_bytes": 396417, "num_examples": 2938, "dataset_name": "md_gender_bias"}}, "download_checksums": {"http://parl.ai/downloads/md_gender/gend_multiclass_10072020.tgz": {"num_bytes": 232629010, "checksum": "c2c03257c53497b9e453600201fc7245b55dec1d98965093b4657fdb54822e9d"}}, "download_size": 232629010, "post_processing_size": null, "dataset_size": 4024164, "size_in_bytes": 236653174}, "image_chat": {"description": "Machine learning models are trained to find patterns in data.\nNLP models can inadvertently learn socially undesirable patterns when training on gender biased text.\nIn this work, we propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions:\nbias from the gender of the person being spoken about, bias from the gender of the person being spoken to, and bias from the gender of the speaker.\nUsing this fine-grained framework, we automatically annotate eight large scale datasets with gender information.\nIn addition, we collect a novel, crowdsourced evaluation benchmark of utterance-level gender rewrites.\nDistinguishing between gender bias along multiple dimensions is important, as it enables us to train finer-grained gender bias classifiers.\nWe show our classifiers prove valuable for a variety of important applications, such as controlling for gender bias in generative models,\ndetecting gender bias in arbitrary text, and shed light on offensive language in terms of genderedness.\n", "citation": "@inproceedings{md_gender_bias,\n author = {Emily Dinan and\n Angela Fan and\n Ledell Wu and\n Jason Weston and\n Douwe Kiela and\n Adina Williams},\n editor = {Bonnie Webber and\n Trevor Cohn and\n Yulan He and\n Yang Liu},\n title = {Multi-Dimensional Gender Bias Classification},\n booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural\n Language Processing, {EMNLP} 2020, Online, November 16-20, 2020},\n pages = {314--331},\n publisher = {Association for Computational Linguistics},\n year = {2020},\n url = {https://www.aclweb.org/anthology/2020.emnlp-main.23/}\n}\n", "homepage": "https://parl.ai/projects/md_gender/", "license": "MIT License", "features": {"caption": {"dtype": "string", "id": null, "_type": "Value"}, "id": {"dtype": "string", "id": null, "_type": "Value"}, "male": {"dtype": "bool_", "id": null, "_type": "Value"}, "female": {"dtype": 
"bool_", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "md_gender_bias", "config_name": "image_chat", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1061285, "num_examples": 9997, "dataset_name": "md_gender_bias"}, "validation": {"name": "validation", "num_bytes": 35868670, "num_examples": 338180, "dataset_name": "md_gender_bias"}, "test": {"name": "test", "num_bytes": 530126, "num_examples": 5000, "dataset_name": "md_gender_bias"}}, "download_checksums": {"http://parl.ai/downloads/md_gender/gend_multiclass_10072020.tgz": {"num_bytes": 232629010, "checksum": "c2c03257c53497b9e453600201fc7245b55dec1d98965093b4657fdb54822e9d"}}, "download_size": 232629010, "post_processing_size": null, "dataset_size": 37460081, "size_in_bytes": 270089091}, "wizard": {"description": "Machine learning models are trained to find patterns in data.\nNLP models can inadvertently learn socially undesirable patterns when training on gender biased text.\nIn this work, we propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions:\nbias from the gender of the person being spoken about, bias from the gender of the person being spoken to, and bias from the gender of the speaker.\nUsing this fine-grained framework, we automatically annotate eight large scale datasets with gender information.\nIn addition, we collect a novel, crowdsourced evaluation benchmark of utterance-level gender rewrites.\nDistinguishing between gender bias along multiple dimensions is important, as it enables us to train finer-grained gender bias classifiers.\nWe show our classifiers prove valuable for a variety of important applications, such as controlling for gender bias in generative models,\ndetecting gender bias in arbitrary text, and shed light on offensive language in terms of genderedness.\n", "citation": "@inproceedings{md_gender_bias,\n author = {Emily Dinan and\n Angela Fan and\n Ledell Wu and\n Jason Weston and\n Douwe Kiela and\n Adina Williams},\n editor = {Bonnie Webber and\n Trevor Cohn and\n Yulan He and\n Yang Liu},\n title = {Multi-Dimensional Gender Bias Classification},\n booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural\n Language Processing, {EMNLP} 2020, Online, November 16-20, 2020},\n pages = {314--331},\n publisher = {Association for Computational Linguistics},\n year = {2020},\n url = {https://www.aclweb.org/anthology/2020.emnlp-main.23/}\n}\n", "homepage": "https://parl.ai/projects/md_gender/", "license": "MIT License", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "chosen_topic": {"dtype": "string", "id": null, "_type": "Value"}, "gender": {"num_classes": 3, "names": ["gender-neutral", "female", "male"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "builder_name": "md_gender_bias", "config_name": "wizard", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1158785, "num_examples": 10449, "dataset_name": "md_gender_bias"}, "validation": {"name": "validation", "num_bytes": 57824, "num_examples": 537, "dataset_name": "md_gender_bias"}, "test": {"name": "test", "num_bytes": 53126, "num_examples": 470, "dataset_name": "md_gender_bias"}}, "download_checksums": 
{"http://parl.ai/downloads/md_gender/gend_multiclass_10072020.tgz": {"num_bytes": 232629010, "checksum": "c2c03257c53497b9e453600201fc7245b55dec1d98965093b4657fdb54822e9d"}}, "download_size": 232629010, "post_processing_size": null, "dataset_size": 1269735, "size_in_bytes": 233898745}, "convai2_inferred": {"description": "Machine learning models are trained to find patterns in data.\nNLP models can inadvertently learn socially undesirable patterns when training on gender biased text.\nIn this work, we propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions:\nbias from the gender of the person being spoken about, bias from the gender of the person being spoken to, and bias from the gender of the speaker.\nUsing this fine-grained framework, we automatically annotate eight large scale datasets with gender information.\nIn addition, we collect a novel, crowdsourced evaluation benchmark of utterance-level gender rewrites.\nDistinguishing between gender bias along multiple dimensions is important, as it enables us to train finer-grained gender bias classifiers.\nWe show our classifiers prove valuable for a variety of important applications, such as controlling for gender bias in generative models,\ndetecting gender bias in arbitrary text, and shed light on offensive language in terms of genderedness.\n", "citation": "@inproceedings{md_gender_bias,\n author = {Emily Dinan and\n Angela Fan and\n Ledell Wu and\n Jason Weston and\n Douwe Kiela and\n Adina Williams},\n editor = {Bonnie Webber and\n Trevor Cohn and\n Yulan He and\n Yang Liu},\n title = {Multi-Dimensional Gender Bias Classification},\n booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural\n Language Processing, {EMNLP} 2020, Online, November 16-20, 2020},\n pages = {314--331},\n publisher = {Association for Computational Linguistics},\n year = {2020},\n url = {https://www.aclweb.org/anthology/2020.emnlp-main.23/}\n}\n", "homepage": "https://parl.ai/projects/md_gender/", "license": "MIT License", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "binary_label": {"num_classes": 2, "names": ["ABOUT:female", "ABOUT:male"], "names_file": null, "id": null, "_type": "ClassLabel"}, "binary_score": {"dtype": "float32", "id": null, "_type": "Value"}, "ternary_label": {"num_classes": 3, "names": ["ABOUT:female", "ABOUT:male", "ABOUT:gender-neutral"], "names_file": null, "id": null, "_type": "ClassLabel"}, "ternary_score": {"dtype": "float32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "md_gender_bias", "config_name": "convai2_inferred", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 9853669, "num_examples": 131438, "dataset_name": "md_gender_bias"}, "validation": {"name": "validation", "num_bytes": 608046, "num_examples": 7801, "dataset_name": "md_gender_bias"}, "test": {"name": "test", "num_bytes": 608046, "num_examples": 7801, "dataset_name": "md_gender_bias"}}, "download_checksums": {"http://parl.ai/downloads/md_gender/gend_multiclass_10072020.tgz": {"num_bytes": 232629010, "checksum": "c2c03257c53497b9e453600201fc7245b55dec1d98965093b4657fdb54822e9d"}}, "download_size": 232629010, "post_processing_size": null, "dataset_size": 11069761, "size_in_bytes": 243698771}, "light_inferred": {"description": "Machine learning models are trained to find patterns in data.\nNLP models can inadvertently 
learn socially undesirable patterns when training on gender biased text.\nIn this work, we propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions:\nbias from the gender of the person being spoken about, bias from the gender of the person being spoken to, and bias from the gender of the speaker.\nUsing this fine-grained framework, we automatically annotate eight large scale datasets with gender information.\nIn addition, we collect a novel, crowdsourced evaluation benchmark of utterance-level gender rewrites.\nDistinguishing between gender bias along multiple dimensions is important, as it enables us to train finer-grained gender bias classifiers.\nWe show our classifiers prove valuable for a variety of important applications, such as controlling for gender bias in generative models,\ndetecting gender bias in arbitrary text, and shed light on offensive language in terms of genderedness.\n", "citation": "@inproceedings{md_gender_bias,\n author = {Emily Dinan and\n Angela Fan and\n Ledell Wu and\n Jason Weston and\n Douwe Kiela and\n Adina Williams},\n editor = {Bonnie Webber and\n Trevor Cohn and\n Yulan He and\n Yang Liu},\n title = {Multi-Dimensional Gender Bias Classification},\n booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural\n Language Processing, {EMNLP} 2020, Online, November 16-20, 2020},\n pages = {314--331},\n publisher = {Association for Computational Linguistics},\n year = {2020},\n url = {https://www.aclweb.org/anthology/2020.emnlp-main.23/}\n}\n", "homepage": "https://parl.ai/projects/md_gender/", "license": "MIT License", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "binary_label": {"num_classes": 2, "names": ["ABOUT:female", "ABOUT:male"], "names_file": null, "id": null, "_type": "ClassLabel"}, "binary_score": {"dtype": "float32", "id": null, "_type": "Value"}, "ternary_label": {"num_classes": 3, "names": ["ABOUT:female", "ABOUT:male", "ABOUT:gender-neutral"], "names_file": null, "id": null, "_type": "ClassLabel"}, "ternary_score": {"dtype": "float32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "md_gender_bias", "config_name": "light_inferred", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 10931355, "num_examples": 106122, "dataset_name": "md_gender_bias"}, "validation": {"name": "validation", "num_bytes": 679692, "num_examples": 6362, "dataset_name": "md_gender_bias"}, "test": {"name": "test", "num_bytes": 1375745, "num_examples": 12765, "dataset_name": "md_gender_bias"}}, "download_checksums": {"http://parl.ai/downloads/md_gender/gend_multiclass_10072020.tgz": {"num_bytes": 232629010, "checksum": "c2c03257c53497b9e453600201fc7245b55dec1d98965093b4657fdb54822e9d"}}, "download_size": 232629010, "post_processing_size": null, "dataset_size": 12986792, "size_in_bytes": 245615802}, "opensubtitles_inferred": {"description": "Machine learning models are trained to find patterns in data.\nNLP models can inadvertently learn socially undesirable patterns when training on gender biased text.\nIn this work, we propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions:\nbias from the gender of the person being spoken about, bias from the gender of the person being spoken to, and bias from the gender of the speaker.\nUsing this fine-grained framework, we automatically 
annotate eight large scale datasets with gender information.\nIn addition, we collect a novel, crowdsourced evaluation benchmark of utterance-level gender rewrites.\nDistinguishing between gender bias along multiple dimensions is important, as it enables us to train finer-grained gender bias classifiers.\nWe show our classifiers prove valuable for a variety of important applications, such as controlling for gender bias in generative models,\ndetecting gender bias in arbitrary text, and shed light on offensive language in terms of genderedness.\n", "citation": "@inproceedings{md_gender_bias,\n author = {Emily Dinan and\n Angela Fan and\n Ledell Wu and\n Jason Weston and\n Douwe Kiela and\n Adina Williams},\n editor = {Bonnie Webber and\n Trevor Cohn and\n Yulan He and\n Yang Liu},\n title = {Multi-Dimensional Gender Bias Classification},\n booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural\n Language Processing, {EMNLP} 2020, Online, November 16-20, 2020},\n pages = {314--331},\n publisher = {Association for Computational Linguistics},\n year = {2020},\n url = {https://www.aclweb.org/anthology/2020.emnlp-main.23/}\n}\n", "homepage": "https://parl.ai/projects/md_gender/", "license": "MIT License", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "binary_label": {"num_classes": 2, "names": ["ABOUT:female", "ABOUT:male"], "names_file": null, "id": null, "_type": "ClassLabel"}, "binary_score": {"dtype": "float32", "id": null, "_type": "Value"}, "ternary_label": {"num_classes": 3, "names": ["ABOUT:female", "ABOUT:male", "ABOUT:gender-neutral"], "names_file": null, "id": null, "_type": "ClassLabel"}, "ternary_score": {"dtype": "float32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "md_gender_bias", "config_name": "opensubtitles_inferred", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 27966476, "num_examples": 351036, "dataset_name": "md_gender_bias"}, "validation": {"name": "validation", "num_bytes": 3363802, "num_examples": 41957, "dataset_name": "md_gender_bias"}, "test": {"name": "test", "num_bytes": 3830528, "num_examples": 49108, "dataset_name": "md_gender_bias"}}, "download_checksums": {"http://parl.ai/downloads/md_gender/gend_multiclass_10072020.tgz": {"num_bytes": 232629010, "checksum": "c2c03257c53497b9e453600201fc7245b55dec1d98965093b4657fdb54822e9d"}}, "download_size": 232629010, "post_processing_size": null, "dataset_size": 35160806, "size_in_bytes": 267789816}, "yelp_inferred": {"description": "Machine learning models are trained to find patterns in data.\nNLP models can inadvertently learn socially undesirable patterns when training on gender biased text.\nIn this work, we propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions:\nbias from the gender of the person being spoken about, bias from the gender of the person being spoken to, and bias from the gender of the speaker.\nUsing this fine-grained framework, we automatically annotate eight large scale datasets with gender information.\nIn addition, we collect a novel, crowdsourced evaluation benchmark of utterance-level gender rewrites.\nDistinguishing between gender bias along multiple dimensions is important, as it enables us to train finer-grained gender bias classifiers.\nWe show our classifiers prove valuable for a variety of important applications, such as controlling 
for gender bias in generative models,\ndetecting gender bias in arbitrary text, and shed light on offensive language in terms of genderedness.\n", "citation": "@inproceedings{md_gender_bias,\n author = {Emily Dinan and\n Angela Fan and\n Ledell Wu and\n Jason Weston and\n Douwe Kiela and\n Adina Williams},\n editor = {Bonnie Webber and\n Trevor Cohn and\n Yulan He and\n Yang Liu},\n title = {Multi-Dimensional Gender Bias Classification},\n booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural\n Language Processing, {EMNLP} 2020, Online, November 16-20, 2020},\n pages = {314--331},\n publisher = {Association for Computational Linguistics},\n year = {2020},\n url = {https://www.aclweb.org/anthology/2020.emnlp-main.23/}\n}\n", "homepage": "https://parl.ai/projects/md_gender/", "license": "MIT License", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "binary_label": {"num_classes": 2, "names": ["ABOUT:female", "ABOUT:male"], "names_file": null, "id": null, "_type": "ClassLabel"}, "binary_score": {"dtype": "float32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "md_gender_bias", "config_name": "yelp_inferred", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 260582945, "num_examples": 2577862, "dataset_name": "md_gender_bias"}, "validation": {"name": "validation", "num_bytes": 324349, "num_examples": 4492, "dataset_name": "md_gender_bias"}, "test": {"name": "test", "num_bytes": 53887700, "num_examples": 534460, "dataset_name": "md_gender_bias"}}, "download_checksums": {"http://parl.ai/downloads/md_gender/gend_multiclass_10072020.tgz": {"num_bytes": 232629010, "checksum": "c2c03257c53497b9e453600201fc7245b55dec1d98965093b4657fdb54822e9d"}}, "download_size": 232629010, "post_processing_size": null, "dataset_size": 314794994, "size_in_bytes": 547424004}}
dummy/convai2_inferred/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fd5fe75c22d3ad2aa9be6f40c3048a67d950f7173d28484772b3bc4ffab379f1
+ size 21054
dummy/funpedia/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:588d083b3055694557ded3c576b14a9a21948ad2b45f5520a02046363493cfbb
+ size 21054
dummy/gendered_words/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0c03c0e060bfacad4bc7ea29ca4d0087288042552f5345ece6415cfac819f836
+ size 21054
dummy/image_chat/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c85a5e125c11927219c6078230747be0ab4ed8283a3cc71a97b49debe3d872dc
+ size 21054
dummy/light_inferred/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6e587a526d23eb565a4f93a76ad7a0a877efb9c7a337ad99e31d942810d64693
+ size 21054
dummy/name_genders/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:31924a7d5477faf2db6945c9ea41e4ed03f90e6f543a94a56e718c884a0ce5ed
+ size 56490
dummy/new_data/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d7d76ecbe7368e6e1cafb960bcff9688b34495c36f2ead68e41542f69ee5f6b7
+ size 21054
dummy/opensubtitles_inferred/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4453ed97b5c1069a5d95f5089569c7ded6cd6222d1ae79826cc0d9958e58a54c
+ size 21054
dummy/wizard/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3bf7813fee98e70c908c3eca27b33081447ba09ae4b79e23949b118f16b2c4b9
+ size 21054
dummy/yelp_inferred/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8ffa7355ecd8215f9e3d50f5cc7b63cac07bef916f04829bca4e131d97a7abc1
+ size 21054
md_gender_bias.py ADDED
@@ -0,0 +1,410 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """Multi-Dimensional Gender Bias classification"""
+
+ from __future__ import absolute_import, division, print_function
+
+ import json
+ import os
+
+ import datasets
+
+
+ # TODO: Add BibTeX citation
+ # Find for instance the citation on arxiv or on the dataset repo/website
+ _CITATION = """\
+ @inproceedings{md_gender_bias,
+     author = {Emily Dinan and
+               Angela Fan and
+               Ledell Wu and
+               Jason Weston and
+               Douwe Kiela and
+               Adina Williams},
+     editor = {Bonnie Webber and
+               Trevor Cohn and
+               Yulan He and
+               Yang Liu},
+     title = {Multi-Dimensional Gender Bias Classification},
+     booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural
+                  Language Processing, {EMNLP} 2020, Online, November 16-20, 2020},
+     pages = {314--331},
+     publisher = {Association for Computational Linguistics},
+     year = {2020},
+     url = {https://www.aclweb.org/anthology/2020.emnlp-main.23/}
+ }
+ """
+
+ # TODO: Add description of the dataset here
+ # You can copy an official description
+ _DESCRIPTION = """\
+ Machine learning models are trained to find patterns in data.
+ NLP models can inadvertently learn socially undesirable patterns when training on gender biased text.
+ In this work, we propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions:
+ bias from the gender of the person being spoken about, bias from the gender of the person being spoken to, and bias from the gender of the speaker.
+ Using this fine-grained framework, we automatically annotate eight large scale datasets with gender information.
+ In addition, we collect a novel, crowdsourced evaluation benchmark of utterance-level gender rewrites.
+ Distinguishing between gender bias along multiple dimensions is important, as it enables us to train finer-grained gender bias classifiers.
+ We show our classifiers prove valuable for a variety of important applications, such as controlling for gender bias in generative models,
+ detecting gender bias in arbitrary text, and shed light on offensive language in terms of genderedness.
+ """
+
+ _HOMEPAGE = "https://parl.ai/projects/md_gender/"
+
+ _LICENSE = "MIT License"
+
+ _URL = "http://parl.ai/downloads/md_gender/gend_multiclass_10072020.tgz"
+
+ _CONF_FILES = {
+     "funpedia": {
+         "train": "funpedia/train.jsonl",
+         "validation": "funpedia/valid.jsonl",
+         "test": "funpedia/test.jsonl",
+     },
+     "image_chat": {
+         "train": "image_chat/engaging_imagechat_gender_captions_hashed.test.jsonl",
+         "validation": "image_chat/engaging_imagechat_gender_captions_hashed.train.jsonl",
+         "test": "image_chat/engaging_imagechat_gender_captions_hashed.valid.jsonl",
+     },
+     "wizard": {
+         "train": "wizard/train.jsonl",
+         "validation": "wizard/valid.jsonl",
+         "test": "wizard/test.jsonl",
+     },
+     "convai2_inferred": {
+         "train": (
+             "inferred_about/convai2_train_binary.txt",
+             "inferred_about/convai2_train.txt",
+         ),
+         "validation": (
+             "inferred_about/convai2_valid_binary.txt",
+             "inferred_about/convai2_valid.txt",
+         ),
+         "test": (
+             "inferred_about/convai2_test_binary.txt",
+             "inferred_about/convai2_test.txt",
+         ),
+     },
+     "light_inferred": {
+         "train": (
+             "inferred_about/light_train_binary.txt",
+             "inferred_about/light_train.txt",
+         ),
+         "validation": (
+             "inferred_about/light_valid_binary.txt",
+             "inferred_about/light_valid.txt",
+         ),
+         "test": (
+             "inferred_about/light_test_binary.txt",
+             "inferred_about/light_test.txt",
+         ),
+     },
+     "opensubtitles_inferred": {
+         "train": (
+             "inferred_about/opensubtitles_train_binary.txt",
+             "inferred_about/opensubtitles_train.txt",
+         ),
+         "validation": (
+             "inferred_about/opensubtitles_valid_binary.txt",
+             "inferred_about/opensubtitles_valid.txt",
+         ),
+         "test": (
+             "inferred_about/opensubtitles_test_binary.txt",
+             "inferred_about/opensubtitles_test.txt",
+         ),
+     },
+     "yelp_inferred": {
+         "train": (
+             "inferred_about/yelp_train_binary.txt",
+             "",
+         ),
+         "validation": (
+             "inferred_about/yelp_valid_binary.txt",
+             "",
+         ),
+         "test": (
+             "inferred_about/yelp_test_binary.txt",
+             "",
+         ),
+     },
+ }
+
+
+ class MdGenderBias(datasets.GeneratorBasedBuilder):
+     """Multi-Dimensional Gender Bias classification"""
+
+     VERSION = datasets.Version("1.0.0")
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(
+             name="gendered_words",
+             version=VERSION,
+             description="A list of common nouns with a masculine and feminine variant.",
+         ),
+         datasets.BuilderConfig(
+             name="name_genders",
+             version=VERSION,
+             description="A list of first names with their gender attribution by year in the US.",
+         ),
+         datasets.BuilderConfig(
+             name="new_data", version=VERSION, description="Some data reformulated and annotated along all three axes."
+         ),
+         datasets.BuilderConfig(
+             name="funpedia",
+             version=VERSION,
+             description="Data from Funpedia with ABOUT annotations based on Funpedia information on an entity's gender.",
+         ),
+         datasets.BuilderConfig(
+             name="image_chat",
+             version=VERSION,
+             description="Data from ImageChat with ABOUT annotations based on image recognition.",
+         ),
+         datasets.BuilderConfig(
+             name="wizard",
+             version=VERSION,
+             description="Data from WizardsOfWikipedia with ABOUT annotations based on Wikipedia information on an entity's gender.",
+         ),
+         datasets.BuilderConfig(
+             name="convai2_inferred",
+             version=VERSION,
+             description="Data from the ConvAI2 challenge with ABOUT annotations inferred by a trained classifier.",
+         ),
+         datasets.BuilderConfig(
+             name="light_inferred",
+             version=VERSION,
+             description="Data from LIGHT with ABOUT annotations inferred by a trained classifier.",
+         ),
+         datasets.BuilderConfig(
+             name="opensubtitles_inferred",
+             version=VERSION,
+             description="Data from OpenSubtitles with ABOUT annotations inferred by a trained classifier.",
+         ),
+         datasets.BuilderConfig(
+             name="yelp_inferred",
+             version=VERSION,
+             description="Data from Yelp reviews with ABOUT annotations inferred by a trained classifier.",
+         ),
+     ]
+
+     DEFAULT_CONFIG_NAME = (
+         "new_data"  # It's not mandatory to have a default configuration. Just use one if it makes sense.
+     )
+
+     def _info(self):
+         # This method specifies the datasets.DatasetInfo object, which contains information and typings for the dataset
+         if (
+             self.config.name == "gendered_words"
+         ):  # This is the name of the configuration selected in BUILDER_CONFIGS above
+             features = datasets.Features(
+                 {
+                     "word_masculine": datasets.Value("string"),
+                     "word_feminine": datasets.Value("string"),
+                 }
+             )
+         elif self.config.name == "name_genders":
+             features = datasets.Features(
+                 {
+                     "name": datasets.Value("string"),
+                     "assigned_gender": datasets.ClassLabel(names=["M", "F"]),
+                     "count": datasets.Value("int32"),
+                 }
+             )
+         elif self.config.name == "new_data":
+             features = datasets.Features(
+                 {
+                     "text": datasets.Value("string"),
+                     "original": datasets.Value("string"),
+                     "labels": [
+                         datasets.ClassLabel(
+                             names=[
+                                 "ABOUT:female",
+                                 "ABOUT:male",
+                                 "PARTNER:female",
+                                 "PARTNER:male",
+                                 "SELF:female",
+                                 "SELF:male",
+                             ]
+                         )
+                     ],
+                     "class_type": datasets.ClassLabel(names=["about", "partner", "self"]),
+                     "turker_gender": datasets.ClassLabel(
+                         names=["man", "woman", "nonbinary", "prefer not to say", "no answer"]
+                     ),
+                     "episode_done": datasets.Value("bool_"),
+                     "confidence": datasets.Value("string"),
+                 }
+             )
+         elif self.config.name == "funpedia":
+             features = datasets.Features(
+                 {
+                     "text": datasets.Value("string"),
+                     "title": datasets.Value("string"),
+                     "persona": datasets.Value("string"),
+                     "gender": datasets.ClassLabel(names=["gender-neutral", "female", "male"]),
+                 }
+             )
+         elif self.config.name == "image_chat":
+             features = datasets.Features(
+                 {
+                     "caption": datasets.Value("string"),
+                     "id": datasets.Value("string"),
+                     "male": datasets.Value("bool_"),
+                     "female": datasets.Value("bool_"),
+                 }
+             )
+         elif self.config.name == "wizard":
+             features = datasets.Features(
+                 {
+                     "text": datasets.Value("string"),
+                     "chosen_topic": datasets.Value("string"),
+                     "gender": datasets.ClassLabel(names=["gender-neutral", "female", "male"]),
+                 }
+             )
+         elif self.config.name == "yelp_inferred":
+             features = datasets.Features(
+                 {
+                     "text": datasets.Value("string"),
+                     "binary_label": datasets.ClassLabel(names=["ABOUT:female", "ABOUT:male"]),
+                     "binary_score": datasets.Value("float"),
+                 }
+             )
+         else:  # data with inferred labels
+             features = datasets.Features(
+                 {
+                     "text": datasets.Value("string"),
+                     "binary_label": datasets.ClassLabel(names=["ABOUT:female", "ABOUT:male"]),
+                     "binary_score": datasets.Value("float"),
+                     "ternary_label": datasets.ClassLabel(names=["ABOUT:female", "ABOUT:male", "ABOUT:gender-neutral"]),
+                     "ternary_score": datasets.Value("float"),
+                 }
+             )
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,  # features are defined above because they differ between the configurations
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         data_dir = os.path.join(dl_manager.download_and_extract(_URL), "data_to_release")
+         if self.config.name == "gendered_words":
+             return [
+                 datasets.SplitGenerator(
+                     name=datasets.Split.TRAIN,
+                     gen_kwargs={
+                         "filepath": None,
+                         "filepath_pair": (
+                             os.path.join(data_dir, "word_list/male_word_file.txt"),
+                             os.path.join(data_dir, "word_list/female_word_file.txt"),
+                         ),
+                     },
+                 )
+             ]
+         elif self.config.name == "name_genders":
+             return [
+                 datasets.SplitGenerator(
+                     name=f"yob{yob}",
+                     gen_kwargs={
+                         "filepath": os.path.join(data_dir, f"names/yob{yob}.txt"),
+                         "filepath_pair": None,
+                     },
+                 )
+                 for yob in range(1880, 2019)
+             ]
+         elif self.config.name == "new_data":
+             return [
+                 datasets.SplitGenerator(
+                     name=datasets.Split.TRAIN,
+                     gen_kwargs={
+                         "filepath": os.path.join(data_dir, "new_data/data.jsonl"),
+                         "filepath_pair": None,
+                     },
+                 )
+             ]
+         elif self.config.name in ["funpedia", "image_chat", "wizard"]:
+             return [
+                 datasets.SplitGenerator(
+                     name=spl,
+                     gen_kwargs={
+                         "filepath": os.path.join(data_dir, fname),
+                         "filepath_pair": None,
+                     },
+                 )
+                 for spl, fname in _CONF_FILES[self.config.name].items()
+             ]
+         else:
+             return [
+                 datasets.SplitGenerator(
+                     name=spl,
+                     gen_kwargs={
+                         "filepath": None,
+                         "filepath_pair": (
+                             os.path.join(data_dir, fname_1),
+                             os.path.join(data_dir, fname_2),
+                         ),
+                     },
+                 )
+                 for spl, (fname_1, fname_2) in _CONF_FILES[self.config.name].items()
+             ]
+
+     def _generate_examples(self, filepath, filepath_pair):
+         if self.config.name == "gendered_words":
+             with open(filepath_pair[0], encoding="utf-8") as f_m:
+                 with open(filepath_pair[1], encoding="utf-8") as f_f:
+                     for id_, (l_m, l_f) in enumerate(zip(f_m, f_f)):
+                         yield id_, {
+                             "word_masculine": l_m.strip(),
+                             "word_feminine": l_f.strip(),
+                         }
+         elif self.config.name == "name_genders":
+             with open(filepath, encoding="utf-8") as f:
+                 for id_, line in enumerate(f):
+                     name, g, ct = line.strip().split(",")
+                     yield id_, {
+                         "name": name,
+                         "assigned_gender": g,
+                         "count": int(ct),
+                     }
+         elif "_inferred" in self.config.name:
+             with open(filepath_pair[0], encoding="utf-8") as f_b:
+                 if "yelp" in self.config.name:
+                     for id_, line_b in enumerate(f_b):
+                         text_b, label_b, score_b = line_b.split("\t")
+                         yield id_, {
+                             "text": text_b,
+                             "binary_label": label_b,
+                             "binary_score": float(score_b.strip()),
+                         }
+                 else:
+                     with open(filepath_pair[1], encoding="utf-8") as f_t:
+                         for id_, (line_b, line_t) in enumerate(zip(f_b, f_t)):
+                             text_b, label_b, score_b = line_b.split("\t")
+                             text_t, label_t, score_t = line_t.split("\t")
+                             yield id_, {
+                                 "text": text_b,
+                                 "binary_label": label_b,
+                                 "binary_score": float(score_b.strip()),
+                                 "ternary_label": label_t,
+                                 "ternary_score": float(score_t.strip()),
+                             }
+         else:
+             with open(filepath, encoding="utf-8") as f:
+                 for id_, line in enumerate(f):
+                     example = json.loads(line.strip())
+                     if "turker_gender" in example and example["turker_gender"] is None:
+                         example["turker_gender"] = "no answer"
+                     yield id_, example
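
A minimal usage sketch for the script added above (illustrative only, not part of this commit): it assumes the dataset is published under the identifier "md_gender_bias" recorded in dataset_infos.json and that a datasets release of at least 1.2.0 is installed; the config and split names come from BUILDER_CONFIGS and _split_generators.

from datasets import load_dataset

# The "funpedia" config has train/validation/test splits with text, title, persona and a gender ClassLabel.
funpedia = load_dataset("md_gender_bias", "funpedia")
print(funpedia["train"][0])

# The "name_genders" config exposes one split per year of birth (yob1880 ... yob2018),
# mirroring the SplitGenerator loop in _split_generators.
names_1990 = load_dataset("md_gender_bias", "name_genders", split="yob1990")
print(names_1990[0])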