---
license: cc0-1.0
task_categories:
- text-classification
language:
- en
tags:
- toxicity
pretty_name: CivilComments WILDS
size_categories:
- 100K<n<1M
---
# Dataset Card for CivilComments WILDS

## Dataset Description

- **Homepage:** https://wilds.stanford.edu/datasets/#civilcomments
- **Repository:**
- **Paper:** https://arxiv.org/abs/2012.07421 | https://arxiv.org/abs/1903.04561
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

![](https://wilds.stanford.edu/assets/images/dataset_figures/civilcomments_dataset.jpg)

Automatic review of user-generated text, such as detecting toxic comments, is an important tool for moderating the sheer volume of text written on the Internet. Unfortunately, prior work has shown that such toxicity classifiers pick up on biases in the training data and spuriously associate toxicity with the mention of certain demographics ([Park et al., 2018](https://arxiv.org/abs/1808.07231); [Dixon et al., 2018](https://research.google/pubs/pub46743/)). These spurious correlations can significantly degrade model performance on particular subpopulations ([Sagawa et al., 2020](https://arxiv.org/abs/1911.08731)).
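
Because of these subpopulation effects, WILDS-style evaluation reports performance per demographic group and focuses on the worst group rather than the overall average. The sketch below is illustrative only (not part of the official benchmark code, and the toy predictions, labels, and group ids are invented) and shows how per-group and worst-group accuracy can be computed:

```python
# Illustrative sketch: per-group and worst-group accuracy for a toxicity
# classifier evaluated across demographic subpopulations. Toy data only.
from collections import defaultdict

def worst_group_accuracy(preds, labels, groups):
    """Return (per-group accuracy dict, worst-group accuracy)."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for p, y, g in zip(preds, labels, groups):
        total[g] += 1
        correct[g] += int(p == y)
    per_group = {g: correct[g] / total[g] for g in total}
    return per_group, min(per_group.values())

# Toy example: binary toxicity predictions over two hypothetical groups.
preds  = [1, 0, 1, 1, 0, 0]
labels = [1, 0, 0, 1, 0, 1]
groups = ["a", "a", "a", "b", "b", "b"]
per_group, worst = worst_group_accuracy(preds, labels, groups)
# Each group gets 2 of 3 correct, so the worst-group accuracy is 2/3.
```

A model can have high average accuracy while `worst_group_accuracy` is low, which is exactly the failure mode the cited work describes.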
### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

English

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

This dataset is in the public domain and is distributed under [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

### Citation Information

@inproceedings{wilds2021,
  title = {{WILDS}: A Benchmark of in-the-Wild Distribution Shifts},
  author = {Pang Wei Koh and Shiori Sagawa and Henrik Marklund and Sang Michael Xie and Marvin Zhang and Akshay Balsubramani and Weihua Hu and Michihiro Yasunaga and Richard Lanas Phillips and Irena Gao and Tony Lee and Etienne David and Ian Stavness and Wei Guo and Berton A. Earnshaw and Imran S. Haque and Sara Beery and Jure Leskovec and Anshul Kundaje and Emma Pierson and Sergey Levine and Chelsea Finn and Percy Liang},
  booktitle = {International Conference on Machine Learning (ICML)},
  year = {2021}
}

@inproceedings{borkan2019nuanced,
  title = {Nuanced metrics for measuring unintended bias with real data for text classification},
  author = {Borkan, Daniel and Dixon, Lucas and Sorensen, Jeffrey and Thain, Nithum and Vasserman, Lucy},
  booktitle = {Companion Proceedings of The 2019 World Wide Web Conference},
  pages = {491--500},
  year = {2019}
}

@article{DBLP:journals/corr/abs-2211-09110,
  author = {Percy Liang and Rishi Bommasani and Tony Lee and Dimitris Tsipras and Dilara Soylu and Michihiro Yasunaga and Yian Zhang and Deepak Narayanan and Yuhuai Wu and Ananya Kumar and Benjamin Newman and Binhang Yuan and Bobby Yan and Ce Zhang and Christian Cosgrove and Christopher D. Manning and Christopher R{\'{e}} and Diana Acosta{-}Navas and Drew A. Hudson and Eric Zelikman and Esin Durmus and Faisal Ladhak and Frieda Rong and Hongyu Ren and Huaxiu Yao and Jue Wang and Keshav Santhanam and Laurel J. Orr and Lucia Zheng and Mert Y{\"{u}}ksekg{\"{o}}n{\"{u}}l and Mirac Suzgun and Nathan Kim and Neel Guha and Niladri S. Chatterji and Omar Khattab and Peter Henderson and Qian Huang and Ryan Chi and Sang Michael Xie and Shibani Santurkar and Surya Ganguli and Tatsunori Hashimoto and Thomas Icard and Tianyi Zhang and Vishrav Chaudhary and William Wang and Xuechen Li and Yifan Mai and Yuhui Zhang and Yuta Koreeda},
  title = {Holistic Evaluation of Language Models},
  journal = {CoRR},
  volume = {abs/2211.09110},
  year = {2022},
  url = {https://doi.org/10.48550/arXiv.2211.09110},
  doi = {10.48550/arXiv.2211.09110},
  eprinttype = {arXiv},
  eprint = {2211.09110},
  timestamp = {Wed, 23 Nov 2022 18:03:56 +0100},
  biburl = {https://dblp.org/rec/journals/corr/abs-2211-09110.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}

### Contributions

[More Information Needed]