LouisThomson committed
Commit 7dadf23
1 Parent(s): 1f23c7d

Made first additions to the Dataset Card

Files changed (1): README.md (+163 −0)
---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
license:
- cc0-1.0
multilinguality:
- monolingual
pretty_name: Toxic Wikipedia Comments
size_categories:
- 100K<n<1M
source_datasets:
- extended|other
tags:
- wikipedia
- toxicity
- toxic comments
task_categories:
- text-classification
task_ids:
- hate-speech-detection
---
# Dataset Card for Wiki Toxic

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

The Wiki Toxic dataset is a modified, cleaned version of the dataset used in the [Kaggle Toxic Comment Classification challenge](https://www.kaggle.com/competitions/jigsaw-toxic-comment-classification-challenge/overview) from 2017/18. It contains comments collected from Wikipedia talk pages, each labelled as either `toxic` or `non-toxic`.

The Kaggle dataset was cleaned using the included `clean.py` file.

### Supported Tasks and Leaderboards

- Text Classification: the dataset can be used to train a model that recognises toxicity in comments and classifies them accordingly.

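The classification task can be illustrated with a toy, self-contained sketch (a hypothetical keyword baseline, not a method from the challenge or from this dataset's tooling):

```python
# Toy keyword baseline for binary toxicity classification (illustrative only;
# the marker list below is hypothetical, not derived from the dataset).
TOXIC_MARKERS = {"sucks", "idiot", "stupid"}

def classify(comment: str) -> int:
    """Return 1 (toxic) if any marker word appears, else 0 (non-toxic)."""
    words = {w.strip('.,!?"').lower() for w in comment.split()}
    return int(bool(words & TOXIC_MARKERS))

print(classify("This article SUCKS."))  # 1 (toxic)
print(classify("Thanks for the fix."))  # 0 (non-toxic)
```

A real system would instead fine-tune a pretrained text classifier on the train split; the baseline above only shows the input/output shape of the task.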
### Languages

The sole language used in the dataset is English.

## Dataset Structure

### Data Instances

Each data point consists of an `id`, the `comment_text` itself, and a `label` (0 for non-toxic, 1 for toxic).

```
{'id': 'a123a58f610cffbc',
 'comment_text': '"This article SUCKS. It may be poorly written, poorly formatted, or full of pointless crap that no one cares about, and probably all of the above. If it can be rewritten into something less horrible, please, for the love of God, do so, before the vacuum caused by its utter lack of quality drags the rest of Wikipedia down into a bottomless pit of mediocrity."',
 'label': 1}
```
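A record of this shape can be wrapped in a small typed container; in the sketch below the class name is ours (not part of the dataset tooling) and the sample text is abbreviated:

```python
from dataclasses import dataclass

LABEL_NAMES = {0: "non-toxic", 1: "toxic"}

@dataclass
class WikiToxicExample:  # hypothetical helper class for illustration
    id: str
    comment_text: str
    label: int  # 0 = non-toxic, 1 = toxic

record = {"id": "a123a58f610cffbc",
          "comment_text": '"This article SUCKS. ..."',  # abbreviated
          "label": 1}

example = WikiToxicExample(**record)
print(LABEL_NAMES[example.label])  # toxic
```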

### Data Fields

- `id`: A unique identifier string for each comment
- `comment_text`: A string containing the text of the comment
- `label`: An integer, either 0 if the comment is non-toxic, or 1 if the comment is toxic

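These field types can be checked with a lightweight validator (a sketch of ours, not part of any dataset tooling):

```python
def validate_example(ex: dict) -> bool:
    """Return True if a record matches the schema described above:
    string id, string comment_text, and an integer label of 0 or 1."""
    return (isinstance(ex.get("id"), str)
            and isinstance(ex.get("comment_text"), str)
            and ex.get("label") in (0, 1))

print(validate_example({"id": "abc", "comment_text": "hi", "label": 0}))  # True
print(validate_example({"id": "abc", "comment_text": "hi", "label": 2}))  # False
```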
### Data Splits

The Wiki Toxic dataset has three splits: *train*, *validation*, and *test*. The statistics for each split are below:

| Dataset Split | Number of data points |
| ------------- | --------------------- |
| Train         | 127,656               |
| Validation    | 31,915                |
| Test          | 63,978                |

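From the counts above, the splits amount to roughly 57% / 14% / 29% of the 223,549 total comments; a quick check:

```python
# Split sizes copied from the table above.
splits = {"train": 127_656, "validation": 31_915, "test": 63_978}
total = sum(splits.values())
print(total)  # 223549
for name, n in splits.items():
    print(f"{name}: {n / total:.1%}")  # train: 57.1%, validation: 14.3%, test: 28.6%
```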
## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

The dataset is released under the CC0 1.0 license, as stated in the `license` field of this card's metadata.

### Citation Information

[More Information Needed]

### Contributions

Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.