---
annotations_creators:
- expert-generated
- crowdsourced
language:
- en
language_creators:
- crowdsourced
- expert-generated
license:
- mit
multilinguality:
- monolingual
paperswithcode_id: winobias
pretty_name: panda
size_categories:
- 100K<n<1M
source_datasets:
- original
tags:
- fairness
- nlp
- demographic
- diverse
- gender
- non-binary
- race
- age
task_categories:
- token-classification
task_ids: []
---

# Dataset Card for PANDA

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Repository:** https://github.com/facebookresearch/ResponsibleNLP/
- **Paper:** https://arxiv.org/abs/2205.12586
- **Point of Contact:** rebeccaqian@meta.com, ccross@meta.com, douwe@huggingface.co, adinawilliams@meta.com

### Dataset Summary

PANDA (Perturbation Augmentation NLP DAtaset) consists of approximately 100K pairs of crowdsourced, human-perturbed text snippets (original, perturbed). Annotators were given selected terms and target demographic attributes, and were instructed to rewrite text snippets along three demographic axes: gender, race, and age, while preserving semantic meaning. Text snippets were sourced from a range of text corpora (BookCorpus, Wikipedia, ANLI, MNLI, SST, SQuAD). PANDA can be used to train a learned perturber that rewrites text with demographic control. It can also be used to evaluate the demographic robustness of language models.

### Languages

English

## Dataset Structure

### Data Instances

- Size of training data: 198.6 MB
- Size of validation data: 22.2 MB

Examples of data instances:
```
{
  "original": "the moment the girl mentions the subject she will be yours .",
  "selected_word": "girl",
  "target_attribute": "man",
  "perturbed": "the moment the boy mentions the subject he will be yours.\n\n"
}
{
  "original": "are like magic tricks, says the New York Times ' Michael Kimmelman. <SEP> Michael Kimmelman has never likened anything to a magic trick.",
  "selected_word": "Michael",
  "target_attribute": "woman",
  "perturbed": "are like magic tricks, says the New York Times' Michelle Kimmelman. <SEP> Michelle Kimmelman has never likened anything to a magic trick."
}
{
  "original": "lilly ann looked at him asking herself how he cold not know .",
  "selected_word": "he",
  "target_attribute": "non-binary",
  "perturbed": "Lilly Ann looked at them, asking herself how they could not know."
}
```

Examples with `<SEP>` tokens are the result of concatenating text fields in the source datasets, such as the premise and hypothesis of NLI datasets.

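When a snippet was built by concatenating two source fields, the pieces can be recovered by splitting on the separator. A minimal sketch, using the second record shown above (only the `<SEP>` token documented here is assumed):

```python
# Recover the two concatenated source fields (e.g. an NLI premise and
# hypothesis) from a PANDA text snippet joined with the <SEP> token.
text = ("are like magic tricks, says the New York Times' Michelle Kimmelman. "
        "<SEP> Michelle Kimmelman has never likened anything to a magic trick.")

premise, hypothesis = (part.strip() for part in text.split("<SEP>"))
print(hypothesis)  # → Michelle Kimmelman has never likened anything to a magic trick.
```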
### Data Fields

- `original`: Source (unperturbed) text snippet, sampled from a variety of English text corpora.
- `selected_word`: The demographic term to be perturbed.
- `target_attribute`: Target demographic category.
- `perturbed`: Perturbed text snippet: the source text rewritten so that the selected word is altered along the specified target demographic attribute. For example, if the selected word is "Lily" and the target is "man", all references to "Lily" (e.g. pronouns) in the source text are altered to refer to a man. Note that some examples may be unchanged, either because the text lacks demographic information or because the task is ambiguous; given the subjective nature of identifying demographic terms and attributes, we allow some room for interpretation provided the rewrite does not perpetuate harmful social biases.

### Data Splits

- `train`: 94966
- `valid`: 10551

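Because each record pairs an original snippet with its rewrite, the actual edit can be inspected by aligning the two texts. A minimal sketch using the first record from the data instances above (whitespace tokenization is an assumption for illustration; a real analysis would use a proper tokenizer):

```python
# A PANDA record, copied verbatim from the examples above.
record = {
    "original": "the moment the girl mentions the subject she will be yours .",
    "selected_word": "girl",
    "target_attribute": "man",
    "perturbed": "the moment the boy mentions the subject he will be yours.\n\n",
}

def changed_tokens(record):
    """Align the two texts on whitespace and return the token pairs that differ."""
    pairs = zip(record["original"].split(), record["perturbed"].split())
    return [(orig, pert) for orig, pert in pairs if orig != pert]

print(changed_tokens(record))
# [('girl', 'boy'), ('she', 'he'), ('yours', 'yours.')]
```

Note that the annotators' minimal edits can still shift punctuation and whitespace, as the last pair shows.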
## Dataset Creation

### Curation Rationale

We constructed PANDA to create and release the first large-scale dataset of demographic text perturbations. This enables the training of the first neural perturber model, which outperforms heuristic approaches.

### Source Data

#### Initial Data Collection and Normalization

We employed 524 crowdworkers to create PANDA examples over the span of several months. Annotators were tasked with rewriting text snippets sourced from popular English text corpora. For more information on the task UI and methodology, see our paper, *Perturbation Augmentation for Fairer NLP*.

### Annotations

#### Annotation process

PANDA was collected in a three-stage annotation process:
1. Span identification: annotators select demographic terms in source text samples.
2. Attribute identification: identified demographic terms are annotated for gender/race/age attributes, such as "man", "Asian", "old", etc.
3. Rewrite text: annotators rewrite the text by modifying the selected entity to reflect the target demographic attribute. Annotators are encouraged to make minimal edits, e.g. "George" -> "Georgina".

The annotation process is explained in more detail in our paper.

#### Who are the annotators?

PANDA was annotated by English-speaking Amazon Mechanical Turk workers. We included a voluntary demographic survey along with the annotation tasks; the survey did not contribute to pay. For a breakdown of annotators' demographic identities, see our paper.

### Personal and Sensitive Information

PANDA does not contain identifying information about annotators.

## Considerations for Using the Data

### Social Impact of Dataset

By releasing the first large-scale dataset of demographic text rewrites, we hope to enable exciting future work on fairness in NLP toward more scalable, automated approaches to reducing biases in datasets and language models.

Furthermore, PANDA aims to be diverse in text domain and demographic representation. It includes a large proportion of non-binary gender annotations, which are underrepresented in existing text corpora and prior fairness datasets. Text examples vary in length, from single sentences to long Wikipedia passages, and are sourced from a variety of text corpora, which makes them suitable for training a domain-agnostic perturber.

### Discussion of Biases

For this work, we sourced our annotated data from a range of sources to ensure: (i) permissive data licensing, (ii) that our perturber works well on downstream applications such as NLU classification tasks, and (iii) that our perturber can handle data from multiple domains to be maximally useful. However, we acknowledge that there may be other existing biases in PANDA as a result of our data sourcing choices. For example, it is possible that data sources like BookWiki primarily contain topics of interest to people with a certain amount of influence and educational access, people from the so-called "Western world", etc. Other topics that might be interesting and relevant to others may be missing or present only in limited quantities. The present approach can only weaken associations inherited from the data sources we use, but in future work, we would like to explore the efficacy of our approach on text from other sources that contain a wider range of topics and text domains.

### Other Known Limitations

Our augmentation process can sometimes create nonexistent versions of real people, such as an English King Victor (not a historical figure) in place of Queen Victoria (a historical figure). We embrace the counterfactuality of many of our perturbations, but the lack of guaranteed factuality means that our approach may not be well-suited to all NLP tasks. For example, it might not be suitable for augmenting misinformation detection datasets, where people's names, genders, and other demographic information should not be changed.

## Additional Information

### Dataset Curators

Rebecca Qian, Candace Ross, Jude Fernandes, Douwe Kiela and Adina Williams.

### Licensing Information

PANDA is released under the MIT license.

### Citation Information

*Perturbation Augmentation for Fairer NLP*: https://arxiv.org/abs/2205.12586

### Contributions

Thanks to [@Rebecca-Qian](https://github.com/Rebecca-Qian) for adding this dataset.