Commit c0c0f3f (parent 41d7054), committed by HF staff (system)

Update files from the datasets library (from 1.3.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.3.0

Files changed: README.md (+247 -0)
---
annotations_creators:
- crowdsourced
language_creators:
- found
languages:
- en
licenses:
- cc-by-4-0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- conditional-text-generation
- text-classification
task_ids:
- explanation-generation
- hate-speech-detection
---

# Dataset Card for "social_bias_frames"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits Sample Size](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## [Dataset Description](#dataset-description)

- **Homepage:** [https://homes.cs.washington.edu/~msap/social-bias-frames/](https://homes.cs.washington.edu/~msap/social-bias-frames/)
- **Repository:** [https://homes.cs.washington.edu/~msap/social-bias-frames/](https://homes.cs.washington.edu/~msap/social-bias-frames/)
- **Paper:** [Social Bias Frames: Reasoning about Social and Power Implications of Language](https://www.aclweb.org/anthology/2020.acl-main.486.pdf)
- **Leaderboard:**
- **Point of Contact:** [Maarten Sap](mailto:msap@cs.washington.edu)
- **Size of downloaded dataset files:** 6.03 MB
- **Size of the generated dataset:** 42.41 MB
- **Total amount of disk used:** 48.45 MB

### [Dataset Summary](#dataset-summary)

Warning: this document and dataset contain content that may be offensive or upsetting.

Social Bias Frames is a new way of representing the biases and offensiveness that are implied in language. For example, these frames are meant to distill the implication that "women (candidates) are less qualified" behind the statement "we shouldn't lower our standards to hire more women." The Social Bias Inference Corpus (SBIC) supports large-scale learning and evaluation of social implications, with over 150k structured annotations of social media posts spanning over 34k implications about a thousand demographic groups.

### [Supported Tasks](#supported-tasks)

This dataset supports both classification and generation. Sap et al. developed several models using the SBIC. They report an F1 score of 78.8 in predicting whether the posts in the test set were offensive, an F1 score of 78.6 in predicting whether the posts were intending to be offensive, an F1 score of 80.7 in predicting whether the posts were lewd, and an F1 score of 69.9 in predicting whether the posts were targeting a specific group.

Another of Sap et al.'s models performed better on the generation task. They report a BLEU score of 77.9, a Rouge-L score of 68.7, and a WMD score of 0.74 in generating a description of the targeted group given a post, as well as a BLEU score of 52.6, a Rouge-L score of 44.9, and a WMD score of 2.79 in generating a description of the implied offensive statement given a post. See the paper for further details.

### [Languages](#languages)

The language in SBIC is predominantly white-aligned English (78%, using a lexical dialect detector; [Blodgett et al., 2016](https://www.aclweb.org/anthology/D16-1120)). The curators find that less than 10% of posts in SBIC are detected to have the AAE dialect category. The BCP-47 language tag is, presumably, en-US.

## [Dataset Structure](#dataset-structure)

We show detailed information for up to 5 configurations of the dataset.

### [Data Instances](#data-instances)

Each instance contains a post that may contain an offensive statement, together with annotated information about the nature of the offensive implication, the demographics of the annotator, and the origin of the post. See the [Social Bias Frames dataset viewer](https://huggingface.co/datasets/viewer/?dataset=social_bias_frames) to explore more examples.

#### default

- **Size of downloaded dataset files:** 6.03 MB
- **Size of the generated dataset:** 42.41 MB
- **Total amount of disk used:** 48.45 MB

An example of 'validation' looks as follows.
```
{
    "HITId": "3Z8UJEJOCZ74QHXOTFP0JOOGD7039J",
    "WorkerId": "7162172225567546910",
    "annotatorAge": "27.0",
    "annotatorGender": "woman",
    "annotatorMinority": "women",
    "annotatorPolitics": "liberal",
    "annotatorRace": "white",
    "dataSource": "t/davidson",
    "intentYN": "1.0",
    "offensiveYN": "1.0",
    "post": "Every half-breed cop in America is trying to rape white women.",
    "sexPhrase": "rape white women",
    "sexReason": "rape",
    "sexYN": "1.0",
    "speakerMinorityYN": "0.0",
    "targetCategory": "race",
    "targetMinority": "mixed folks",
    "targetStereotype": "mixed folks are rapists.",
    "whoTarget": "1.0"
}
```

### [Data Fields](#data-fields)

The data fields are the same among all splits.

#### default
- _whoTarget_: a string, '0.0' if the target is a group, '1.0' if the target is an individual, and blank if the post is not offensive
- _intentYN_: a string indicating if the intent behind the statement was to offend. This is a categorical variable with four possible answers: '1.0' if yes, '0.66' if probably, '0.33' if probably not, and '0.0' if no.
- _sexYN_: a string indicating whether the post contains a sexual or lewd reference. This is a categorical variable with three possible answers: '1.0' if yes, '0.5' if maybe, '0.0' if no.
- _sexReason_: a string containing a free-text explanation of what is sexual if indicated so, blank otherwise
- _offensiveYN_: a string indicating if the post could be offensive to anyone. This is a categorical variable with three possible answers: '1.0' if yes, '0.5' if maybe, '0.0' if no.
- _annotatorGender_: a string indicating the gender of the MTurk worker
- _annotatorMinority_: a string indicating whether the MTurk worker identifies as a minority
- _sexPhrase_: a string indicating which part of the post references something sexual, blank otherwise
- _speakerMinorityYN_: a string indicating whether the speaker was part of the same minority group that's being targeted. This is a categorical variable with three possible answers: '1.0' if yes, '0.5' if maybe, '0.0' if no.
- _WorkerId_: a hashed string version of the MTurk worker ID
- _HITId_: a string ID that uniquely identifies each post
- _annotatorPolitics_: a string indicating the political leaning of the MTurk worker
- _annotatorRace_: a string indicating the race of the MTurk worker
- _annotatorAge_: a string indicating the age of the MTurk worker
- _post_: a string containing the text of the post that was annotated
- _targetMinority_: a string indicating the demographic group targeted
- _targetCategory_: a string indicating the high-level category of the demographic group(s) targeted
- _targetStereotype_: a string containing the implied statement
- _dataSource_: a string indicating the source of the post (`t/...` means Twitter, `r/...` means a subreddit)

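Since the categorical fields above are stored as strings, downstream code typically decodes them before use. The following is a minimal sketch, not part of the `datasets` library; the mappings come from the field descriptions above, while the helper and constant names are our own.

```python
# Hypothetical helpers for decoding SBIC's string-encoded categorical fields.
# The label values are taken from the field descriptions in this card.

INTENT_LABELS = {"1.0": "yes", "0.66": "probably", "0.33": "probably not", "0.0": "no"}
TERNARY_LABELS = {"1.0": "yes", "0.5": "maybe", "0.0": "no"}


def decode_intent(value: str) -> str:
    """Map the string-encoded `intentYN` field to a readable category."""
    return INTENT_LABELS.get(value, "unlabeled")


def decode_ternary(value: str) -> str:
    """Map `sexYN`, `offensiveYN`, or `speakerMinorityYN` to a readable category."""
    return TERNARY_LABELS.get(value, "unlabeled")


def source_platform(data_source: str) -> str:
    """Infer the platform from `dataSource` (`t/...` = Twitter, `r/...` = a subreddit)."""
    if data_source.startswith("t/"):
        return "twitter"
    if data_source.startswith("r/"):
        return "reddit"
    return "other"  # e.g. posts scraped from Gab or Stormfront
```

For instance, the validation example shown earlier (`dataSource: "t/davidson"`, `intentYN: "1.0"`) would decode to the Twitter platform with intent label "yes".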
### [Data Splits Sample Size](#data-splits-sample-size)

To ensure that no post appeared in multiple splits, the curators defined a training instance as the post and its three sets of annotations. They then split the dataset into train, validation, and test sets (75%/12.5%/12.5%).

| name |train |validation|test |
|-------|-----:|---------:|----:|
|default|112900| 16738|17501|

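As a quick sanity check on the table above: because the 75%/12.5%/12.5% split was performed over posts rather than over individual annotation rows, the per-row fractions only approximate those ratios. A short sketch (the variable names are our own):

```python
# Row counts per split, taken from the table in this card.
splits = {"train": 112900, "validation": 16738, "test": 17501}
total = sum(splits.values())  # 147139 annotation rows in all

# Per-row fractions; these land near, but not exactly on, 75/12.5/12.5
# because the split was stratified by post, not by row.
fractions = {name: n / total for name, n in splits.items()}
```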
## [Dataset Creation](#dataset-creation)

### [Curation Rationale](#curation-rationale)

The main aim of this dataset is to cover a wide variety of social biases that are implied in text, both subtle and overt, and to make the biases representative of real-world discrimination that people experience ([RWJF 2017](https://web.archive.org/web/20200620105955/https://www.rwjf.org/en/library/research/2017/10/discrimination-in-america--experiences-and-views.html)). The curators also included some innocuous statements, to balance out the biased, offensive, or harmful content.

### [Source Data](#source-data)

The curators included online posts from the following sources, collected sometime between 2014 and 2019:
- r/darkJokes, r/meanJokes, r/offensiveJokes
- Reddit microaggressions ([Breitfeller et al., 2019](https://www.aclweb.org/anthology/D19-1176/))
- Toxic language detection Twitter corpora ([Waseem & Hovy, 2016](https://www.aclweb.org/anthology/N16-2013/); [Davidson et al., 2017](https://www.aaai.org/ocs/index.php/ICWSM/ICWSM17/paper/viewPaper/15665); [Founta et al., 2018](https://www.aaai.org/ocs/index.php/ICWSM/ICWSM18/paper/viewPaper/17909))
- Data scraped from hate sites (Gab, Stormfront, r/incels, r/mensrights)

#### Initial Data Collection and Normalization

The curators wanted posts to be as self-contained as possible; therefore, they applied some filtering to prevent posts from being highly context-dependent. For Twitter data, they filtered out @-replies, retweets, and links, and subsampled posts such that there is a smaller correlation between AAE and offensiveness (to avoid racial bias; [Sap et al., 2019](https://www.aclweb.org/anthology/P19-1163/)). For Reddit, Gab, and Stormfront, they only selected posts that were one sentence long, do not contain links, and are between 10 and 80 words. Furthermore, for Reddit, they automatically removed posts that target automated moderation.

#### Who are the source language producers?

Due to the nature of this corpus, there is no way to know who the speakers are. However, the speakers of the Reddit, Gab, and Stormfront posts are likely white men (see [Gender by subreddit](http://bburky.com/subredditgenderratios/), [Gab users](https://en.wikipedia.org/wiki/Gab_(social_network)#cite_note-insidetheright-22), [Stormfront description](https://en.wikipedia.org/wiki/Stormfront_(website))).

### [Annotations](#annotations)

#### Annotation process

For each post, Amazon Mechanical Turk workers indicate whether the post is offensive, whether the intent was to offend, and whether it contains lewd or sexual content. Only if annotators indicate potential offensiveness do they answer the group implication question. If the post targets or references a group or demographic, workers select or write which one(s); per selected group, they then write two to four stereotypes. Finally, workers are asked whether they think the speaker is part of one of the minority groups referenced by the post. The curators collected three annotations per post and restricted the worker pool to the U.S. and Canada. The annotations in SBIC showed 82.4% pairwise agreement and Krippendorff's α=0.45 on average.

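For intuition, pairwise agreement of the kind reported above can be computed per post from its three annotations and then averaged over the corpus. This is a sketch with our own function names, not the curators' evaluation code:

```python
from itertools import combinations


def pairwise_agreement(labels):
    """Fraction of annotator pairs that gave the same label for one post."""
    pairs = list(combinations(labels, 2))
    return sum(a == b for a, b in pairs) / len(pairs)


def mean_pairwise_agreement(items):
    """Average pairwise agreement over a collection of posts."""
    return sum(pairwise_agreement(labels) for labels in items) / len(items)
```

With three annotators, each post has three pairs, so a 2-vs-1 disagreement yields an agreement of 1/3 for that post.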
Recent work has highlighted various negative side effects caused by annotating potentially abusive or harmful content (e.g., acute stress; Roberts, 2016). The curators mitigated these by limiting the number of posts that one worker could annotate in one day, paying workers above minimum wage ($7–12), and providing crisis management resources to the annotators.

#### Who are the annotators?

The annotators are Amazon Mechanical Turk workers aged 36±10 years old. The annotators consisted of 55% women, 42% men, and <1% non-binary; 82% identified as White, 4% Asian, 4% Hispanic, and 4% Black. Information on their first language(s) and professional backgrounds was not collected.

### Personal and Sensitive Information

Usernames are not included with the data, but the site where the post was collected is, so the user could potentially be recovered.

## [Considerations for Using the Data](#considerations-for-using-the-data)

### [Social Impact of Dataset](#social-impact-of-dataset)

The curators recognize that studying Social Bias Frames necessarily requires confronting online content that may be offensive or disturbing, but argue that deliberate avoidance does not eliminate such problems. By assessing social media content through the lens of Social Bias Frames, automatic flagging tools or AI-augmented writing interfaces can surface potentially harmful online content with detailed explanations for users or moderators to consider and verify. In addition, collective analysis over large corpora can be insightful for educating people on reducing unconscious biases in their language by encouraging empathy towards a targeted group.

### [Discussion of Biases](#discussion-of-biases)

Because this is a corpus of social biases, many posts contain implied or overt biases against the following groups (in decreasing order of prevalence):
- gender/sexuality
- race/ethnicity
- religion/culture
- social/political
- disability/body/age
- victims

The curators warn that technology trained on this dataset could have side effects such as censorship and dialect-based racial bias.

### [Other Known Limitations](#other-known-limitations)

Because the curators found that the dataset is predominantly written in White-aligned English, they caution researchers to consider the potential for dialect- or identity-based biases in labelling ([Davidson et al., 2019](https://www.aclweb.org/anthology/W19-3504.pdf); [Sap et al., 2019a](https://www.aclweb.org/anthology/P19-1163.pdf)) before deploying technology based on SBIC.

## [Additional Information](#additional-information)

### [Dataset Curators](#dataset-curators)

This dataset was developed by Maarten Sap of the Paul G. Allen School of Computer Science & Engineering at the University of Washington; Saadia Gabriel, Lianhui Qin, Noah A. Smith, and Yejin Choi of the Paul G. Allen School of Computer Science & Engineering and the Allen Institute for Artificial Intelligence; and Dan Jurafsky of the Linguistics & Computer Science Departments of Stanford University.

### Licensing Information

The SBIC is licensed under the [Creative Commons Attribution 4.0 License](https://creativecommons.org/licenses/by/4.0/).

### [Citation Information](#citation-information)

```
@inproceedings{sap-etal-2020-social,
    title = "Social Bias Frames: Reasoning about Social and Power Implications of Language",
    author = "Sap, Maarten and
      Gabriel, Saadia and
      Qin, Lianhui and
      Jurafsky, Dan and
      Smith, Noah A. and
      Choi, Yejin",
    booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
    month = jul,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.acl-main.486",
    doi = "10.18653/v1/2020.acl-main.486",
    pages = "5477--5490",
    abstract = "Warning: this paper contains content that may be offensive or upsetting. Language has the power to reinforce stereotypes and project social biases onto others. At the core of the challenge is that it is rarely what is stated explicitly, but rather the implied meanings, that frame people{'}s judgments about others. For example, given a statement that {``}we shouldn{'}t lower our standards to hire more women,{''} most listeners will infer the implicature intended by the speaker - that {``}women (candidates) are less qualified.{''} Most semantic formalisms, to date, do not capture such pragmatic implications in which people express social biases and power differentials in language. We introduce Social Bias Frames, a new conceptual formalism that aims to model the pragmatic frames in which people project social biases and stereotypes onto others. In addition, we introduce the Social Bias Inference Corpus to support large-scale modelling and evaluation with 150k structured annotations of social media posts, covering over 34k implications about a thousand demographic groups. We then establish baseline approaches that learn to recover Social Bias Frames from unstructured text. We find that while state-of-the-art neural models are effective at high-level categorization of whether a given statement projects unwanted social bias (80{\%} F1), they are not effective at spelling out more detailed explanations in terms of Social Bias Frames. Our study motivates future work that combines structured pragmatic inference with commonsense reasoning on social implications.",
}
```

### Contributions

Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@otakumesi](https://github.com/otakumesi), [@mariamabarham](https://github.com/mariamabarham), [@lhoestq](https://github.com/lhoestq) for adding this dataset.