system HF staff committed on
Commit
700e2fa
0 Parent(s):

Update files from the datasets library (from 1.13.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.13.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,253 @@
+ ---
+ annotations_creators:
+ - crowdsourced
+ language_creators:
+ - crowdsourced
+ languages:
+ - en
+ licenses:
+ - cc0-1.0
+ multilinguality:
+ - monolingual
+ pretty_name: Jigsaw Unintended Bias in Toxicity Classification
+ size_categories:
+ - 1M<n<10M
+ source_datasets:
+ - original
+ task_categories:
+ - text-scoring
+ task_ids:
+ - text-scoring-other-toxicity-prediction
+ ---
+
+ # Dataset Card for Jigsaw Unintended Bias in Toxicity Classification
+
+ ## Table of Contents
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification
+ - **Repository:** N/A
+ - **Paper:** N/A
+ - **Leaderboard:** https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/leaderboard
+ - **Point of Contact:** N/A
+
+ ### Dataset Summary
+
+ The Jigsaw Unintended Bias in Toxicity Classification dataset comes from the eponymous Kaggle competition.
+
+ Please see the original [data](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/data)
+ description for more information.
+
+ ### Supported Tasks and Leaderboards
+
+ The main target for this dataset is toxicity prediction. Several toxicity subtypes are also available, so the dataset
+ can be used for multi-attribute prediction.
+
+ See the original [leaderboard](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/leaderboard)
+ for reference.
+
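+ As a rough illustration of multi-attribute prediction, the sketch below stacks the main target and the toxicity subtype columns into one label matrix. It is only a sketch: it assumes `train.csv` has already been downloaded manually from the Kaggle competition page into the working directory and uses pandas; the column names are the ones documented under [Data Fields](#data-fields).
+
+ ```python
+ import pandas as pd
+
+ # Assumes train.csv was downloaded manually from the Kaggle competition page.
+ df = pd.read_csv("train.csv")
+
+ # Main toxicity target plus the subtype columns listed under "Data Fields".
+ label_columns = ["target", "severe_toxicity", "obscene", "identity_attack", "insult", "threat"]
+
+ texts = df["comment_text"].tolist()
+ labels = df[label_columns].to_numpy()  # shape (num_comments, 6), values in [0, 1]
+ ```
+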
+ ### Languages
+
+ English
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ A data point consists of an id, a comment, the main target, the other toxicity subtypes as well as identity attributes.
+
+ For instance, here's the first train example.
+ ```
+ {
+     "article_id": 2006,
+     "asian": NaN,
+     "atheist": NaN,
+     "bisexual": NaN,
+     "black": NaN,
+     "buddhist": NaN,
+     "christian": NaN,
+     "comment_text": "This is so cool. It's like, 'would you want your mother to read this??' Really great idea, well done!",
+     "created_date": "2015-09-29 10:50:41.987077+00",
+     "disagree": 0,
+     "female": NaN,
+     "funny": 0,
+     "heterosexual": NaN,
+     "hindu": NaN,
+     "homosexual_gay_or_lesbian": NaN,
+     "identity_annotator_count": 0,
+     "identity_attack": 0.0,
+     "insult": 0.0,
+     "intellectual_or_learning_disability": NaN,
+     "jewish": NaN,
+     "latino": NaN,
+     "likes": 0,
+     "male": NaN,
+     "muslim": NaN,
+     "obscene": 0.0,
+     "other_disability": NaN,
+     "other_gender": NaN,
+     "other_race_or_ethnicity": NaN,
+     "other_religion": NaN,
+     "other_sexual_orientation": NaN,
+     "parent_id": NaN,
+     "physical_disability": NaN,
+     "psychiatric_or_mental_illness": NaN,
+     "publication_id": 2,
+     "rating": 0,
+     "sad": 0,
+     "severe_toxicity": 0.0,
+     "sexual_explicit": 0.0,
+     "target": 0.0,
+     "threat": 0.0,
+     "toxicity_annotator_count": 4,
+     "transgender": NaN,
+     "white": NaN,
+     "wow": 0
+ }
+ ```
+
+ ### Data Fields
+
+ - `id`: id of the comment
+ - `target`: value between 0 (non-toxic) and 1 (toxic) classifying the comment
+ - `comment_text`: the text of the comment
+ - `severe_toxicity`: value between 0 (non-severe_toxic) and 1 (severe_toxic) classifying the comment
+ - `obscene`: value between 0 (non-obscene) and 1 (obscene) classifying the comment
+ - `identity_attack`: value between 0 (non-identity_hate) and 1 (identity_hate) classifying the comment
+ - `insult`: value between 0 (non-insult) and 1 (insult) classifying the comment
+ - `threat`: value between 0 (non-threat) and 1 (threat) classifying the comment
+ - For a subset of rows, columns indicating whether the comment mentions each of these identities (they may contain NaNs; see the filtering sketch after this list):
+   - `male`
+   - `female`
+   - `transgender`
+   - `other_gender`
+   - `heterosexual`
+   - `homosexual_gay_or_lesbian`
+   - `bisexual`
+   - `other_sexual_orientation`
+   - `christian`
+   - `jewish`
+   - `muslim`
+   - `hindu`
+   - `buddhist`
+   - `atheist`
+   - `other_religion`
+   - `black`
+   - `white`
+   - `asian`
+   - `latino`
+   - `other_race_or_ethnicity`
+   - `physical_disability`
+   - `intellectual_or_learning_disability`
+   - `psychiatric_or_mental_illness`
+   - `other_disability`
+ - Other metadata related to the source of the comment, such as creation date, publication id, number of likes, number of annotators, etc.:
+   - `created_date`
+   - `publication_id`
+   - `parent_id`
+   - `article_id`
+   - `rating`
+   - `funny`
+   - `wow`
+   - `sad`
+   - `likes`
+   - `disagree`
+   - `sexual_explicit`
+   - `identity_annotator_count`
+   - `toxicity_annotator_count`
+
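+ The identity columns are only populated for comments that received identity annotations; everywhere else they are NaN. Below is a minimal, non-authoritative sketch for selecting that annotated subset with the `datasets` library, assuming the Kaggle files have been downloaded and extracted manually as the loading script requires:
+
+ ```python
+ from datasets import load_dataset
+
+ # data_dir must point to the manually downloaded and extracted Kaggle files.
+ ds = load_dataset("jigsaw_unintended_bias", data_dir="/path/to/extracted/data/")
+
+ # Keep only comments that actually received identity annotations;
+ # for all other rows the identity columns listed above are NaN.
+ identity_subset = ds["train"].filter(lambda ex: ex["identity_annotator_count"] > 0)
+ print(len(identity_subset))
+ ```
+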
+ ### Data Splits
+
+ The competition released four sets of data:
+ - train: The train data as released during the competition. Contains labels, and identity information for a
+ subset of rows.
+ - test: The test data as released during the competition. Does not contain labels or identity information and is therefore not exposed by this loader.
+ - test_private_expanded: The private leaderboard test set, including toxicity labels and identity subgroups, exposed here as the `test_private_leaderboard` split. The competition target was a binarized version of the toxicity column, which can easily be reconstructed using a >= 0.5 threshold.
+ - test_public_expanded: The public leaderboard test set, including toxicity labels and identity subgroups, exposed here as the `test_public_leaderboard` split. The competition target was a binarized version of the toxicity column, which can easily be reconstructed using a >= 0.5 threshold, as shown in the loading sketch after this list.
+
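+ Because the files must be fetched manually from Kaggle, loading goes through `data_dir`, as stated in the loading script's download instructions. The sketch below loads the dataset and rebuilds the binarized competition target with the >= 0.5 threshold mentioned above; the split names are the ones defined by the loading script:
+
+ ```python
+ from datasets import load_dataset
+
+ # data_dir must contain the extracted Kaggle CSVs (train.csv, test_*_expanded.csv).
+ ds = load_dataset("jigsaw_unintended_bias", data_dir="/path/to/extracted/data/")
+ print(ds)  # splits: train, test_private_leaderboard, test_public_leaderboard
+
+ # Rebuild the binarized competition target from the continuous toxicity score.
+ binarized = ds["test_public_leaderboard"].map(lambda ex: {"label": int(ex["target"] >= 0.5)})
+ print(binarized[0]["comment_text"], binarized[0]["label"])
+ ```
+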
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ The dataset was created to help in efforts to identify and curb instances of toxicity online.
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ This dataset is released under CC0, as is the underlying comment text.
+
+ ### Citation Information
+
+ No citation is available for this dataset, though you may link to the [Kaggle](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification) competition page.
+
+ ### Contributions
+
+ Thanks to [@iwontbecreative](https://github.com/iwontbecreative) for adding this dataset.
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"default": {"description": "A collection of comments from the defunct Civil Comments platform that have been annotated for their toxicity.\n", "citation": "", "homepage": "https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/", "license": "CC0 (both the dataset and underlying text)", "features": {"target": {"dtype": "float32", "id": null, "_type": "Value"}, "comment_text": {"dtype": "string", "id": null, "_type": "Value"}, "severe_toxicity": {"dtype": "float32", "id": null, "_type": "Value"}, "obscene": {"dtype": "float32", "id": null, "_type": "Value"}, "identity_attack": {"dtype": "float32", "id": null, "_type": "Value"}, "insult": {"dtype": "float32", "id": null, "_type": "Value"}, "threat": {"dtype": "float32", "id": null, "_type": "Value"}, "asian": {"dtype": "float32", "id": null, "_type": "Value"}, "atheist": {"dtype": "float32", "id": null, "_type": "Value"}, "bisexual": {"dtype": "float32", "id": null, "_type": "Value"}, "black": {"dtype": "float32", "id": null, "_type": "Value"}, "buddhist": {"dtype": "float32", "id": null, "_type": "Value"}, "christian": {"dtype": "float32", "id": null, "_type": "Value"}, "female": {"dtype": "float32", "id": null, "_type": "Value"}, "heterosexual": {"dtype": "float32", "id": null, "_type": "Value"}, "hindu": {"dtype": "float32", "id": null, "_type": "Value"}, "homosexual_gay_or_lesbian": {"dtype": "float32", "id": null, "_type": "Value"}, "intellectual_or_learning_disability": {"dtype": "float32", "id": null, "_type": "Value"}, "jewish": {"dtype": "float32", "id": null, "_type": "Value"}, "latino": {"dtype": "float32", "id": null, "_type": "Value"}, "male": {"dtype": "float32", "id": null, "_type": "Value"}, "muslim": {"dtype": "float32", "id": null, "_type": "Value"}, "other_disability": {"dtype": "float32", "id": null, "_type": "Value"}, "other_gender": {"dtype": "float32", "id": null, "_type": "Value"}, "other_race_or_ethnicity": {"dtype": "float32", "id": null, "_type": "Value"}, "other_religion": {"dtype": "float32", "id": null, "_type": "Value"}, "other_sexual_orientation": {"dtype": "float32", "id": null, "_type": "Value"}, "physical_disability": {"dtype": "float32", "id": null, "_type": "Value"}, "psychiatric_or_mental_illness": {"dtype": "float32", "id": null, "_type": "Value"}, "transgender": {"dtype": "float32", "id": null, "_type": "Value"}, "white": {"dtype": "float32", "id": null, "_type": "Value"}, "created_date": {"dtype": "string", "id": null, "_type": "Value"}, "publication_id": {"dtype": "int32", "id": null, "_type": "Value"}, "parent_id": {"dtype": "float32", "id": null, "_type": "Value"}, "article_id": {"dtype": "int32", "id": null, "_type": "Value"}, "rating": {"num_classes": 2, "names": ["rejected", "approved"], "names_file": null, "id": null, "_type": "ClassLabel"}, "funny": {"dtype": "int32", "id": null, "_type": "Value"}, "wow": {"dtype": "int32", "id": null, "_type": "Value"}, "sad": {"dtype": "int32", "id": null, "_type": "Value"}, "likes": {"dtype": "int32", "id": null, "_type": "Value"}, "disagree": {"dtype": "int32", "id": null, "_type": "Value"}, "sexual_explicit": {"dtype": "float32", "id": null, "_type": "Value"}, "identity_annotator_count": {"dtype": "int32", "id": null, "_type": "Value"}, "toxicity_annotator_count": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "jigsaw_unintended_bias", "config_name": "default", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, 
"patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 914264058, "num_examples": 1804874, "dataset_name": "jigsaw_unintended_bias"}, "test_private_leaderboard": {"name": "test_private_leaderboard", "num_bytes": 49188921, "num_examples": 97320, "dataset_name": "jigsaw_unintended_bias"}, "test_public_leaderboard": {"name": "test_public_leaderboard", "num_bytes": 49442360, "num_examples": 97320, "dataset_name": "jigsaw_unintended_bias"}}, "download_checksums": {}, "download_size": 0, "post_processing_size": null, "dataset_size": 1012895339, "size_in_bytes": 1012895339}}
dummy/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d0e82dd9c28b8ad755a09e4eaa8d9446dac6441848e9c1291ee33ca63949f851
+ size 3597
jigsaw_unintended_bias.py ADDED
@@ -0,0 +1,159 @@
+ # coding=utf-8
+ # Copyright 2021 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """Jigsaw Unintended Bias in Toxicity Classification dataset"""
+
+
+ import os
+
+ import pandas as pd
+
+ import datasets
+
+
+ _DESCRIPTION = """\
+ A collection of comments from the defunct Civil Comments platform that have been annotated for their toxicity.
+ """
+
+ _HOMEPAGE = "https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/"
+
+ _LICENSE = "CC0 (both the dataset and underlying text)"
+
+
+ class JigsawUnintendedBias(datasets.GeneratorBasedBuilder):
+     """A collection of comments from the defunct Civil Comments platform that have been annotated for their toxicity."""
+
+     VERSION = datasets.Version("1.1.0")
+
+     @property
+     def manual_download_instructions(self):
+         return """\
+ To use jigsaw_unintended_bias you have to download it manually from Kaggle: https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/data
+ You can manually download the data from its homepage or use the Kaggle CLI tool (follow the instructions here: https://www.kaggle.com/docs/api)
+ Please extract all files in one folder and then load the dataset with:
+ `datasets.load_dataset('jigsaw_unintended_bias', data_dir='/path/to/extracted/data/')`"""
+
+     def _info(self):
+
+         return datasets.DatasetInfo(
+             # This is the description that will appear on the datasets page.
+             description=_DESCRIPTION,
+             # This defines the different columns of the dataset and their types
+             features=datasets.Features(
+                 {
+                     "target": datasets.Value("float32"),
+                     "comment_text": datasets.Value("string"),
+                     "severe_toxicity": datasets.Value("float32"),
+                     "obscene": datasets.Value("float32"),
+                     "identity_attack": datasets.Value("float32"),
+                     "insult": datasets.Value("float32"),
+                     "threat": datasets.Value("float32"),
+                     "asian": datasets.Value("float32"),
+                     "atheist": datasets.Value("float32"),
+                     "bisexual": datasets.Value("float32"),
+                     "black": datasets.Value("float32"),
+                     "buddhist": datasets.Value("float32"),
+                     "christian": datasets.Value("float32"),
+                     "female": datasets.Value("float32"),
+                     "heterosexual": datasets.Value("float32"),
+                     "hindu": datasets.Value("float32"),
+                     "homosexual_gay_or_lesbian": datasets.Value("float32"),
+                     "intellectual_or_learning_disability": datasets.Value("float32"),
+                     "jewish": datasets.Value("float32"),
+                     "latino": datasets.Value("float32"),
+                     "male": datasets.Value("float32"),
+                     "muslim": datasets.Value("float32"),
+                     "other_disability": datasets.Value("float32"),
+                     "other_gender": datasets.Value("float32"),
+                     "other_race_or_ethnicity": datasets.Value("float32"),
+                     "other_religion": datasets.Value("float32"),
+                     "other_sexual_orientation": datasets.Value("float32"),
+                     "physical_disability": datasets.Value("float32"),
+                     "psychiatric_or_mental_illness": datasets.Value("float32"),
+                     "transgender": datasets.Value("float32"),
+                     "white": datasets.Value("float32"),
+                     "created_date": datasets.Value("string"),
+                     "publication_id": datasets.Value("int32"),
+                     "parent_id": datasets.Value("float32"),
+                     "article_id": datasets.Value("int32"),
+                     "rating": datasets.ClassLabel(names=["rejected", "approved"]),
+                     "funny": datasets.Value("int32"),
+                     "wow": datasets.Value("int32"),
+                     "sad": datasets.Value("int32"),
+                     "likes": datasets.Value("int32"),
+                     "disagree": datasets.Value("int32"),
+                     "sexual_explicit": datasets.Value("float32"),
+                     "identity_annotator_count": datasets.Value("int32"),
+                     "toxicity_annotator_count": datasets.Value("int32"),
+                 }
+             ),
+             # If there's a common (input, target) tuple from the features,
+             # specify them here. They'll be used if as_supervised=True in
+             # builder.as_dataset.
+             supervised_keys=None,
+             # Homepage of the dataset for documentation
+             homepage=_HOMEPAGE,
+             # License for the dataset if available
+             license=_LICENSE,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         # This method is tasked with downloading/extracting the data and defining the splits depending on the configuration
+         # If several configurations are possible (listed in BUILDER_CONFIGS), the configuration selected by the user is in self.config.name
+
+         data_dir = os.path.abspath(os.path.expanduser(dl_manager.manual_dir))
+
+         if not os.path.exists(data_dir):
+             raise FileNotFoundError(
+                 "{} does not exist. Make sure you insert a manual dir via `datasets.load_dataset('jigsaw_unintended_bias', data_dir=...)`. Manual download instructions: {}".format(
+                     data_dir, self.manual_download_instructions
+                 )
+             )
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={"path": os.path.join(data_dir, "train.csv"), "split": "train"},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split("test_private_leaderboard"),
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={"path": os.path.join(data_dir, "test_private_expanded.csv"), "split": "test"},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split("test_public_leaderboard"),
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={"path": os.path.join(data_dir, "test_public_expanded.csv"), "split": "test"},
+             ),
+         ]
+
+     def _generate_examples(self, split: str = "train", path: str = None):
+         """Yields examples."""
+         # This method will receive as arguments the `gen_kwargs` defined in the previous `_split_generators` method.
+         # It is in charge of opening the given file and yielding (key, example) tuples from the dataset
+         # The key is not important, it's more here for legacy reasons (legacy from tfds)
+
+         # Avoid loading everything into memory at once
+         all_data = pd.read_csv(path, chunksize=50000)
+
+         for data in all_data:
+             if split != "train":
+                 # The expanded test CSVs name the score column "toxicity" instead of "target"
+                 data = data.rename(columns={"toxicity": "target"})
+
+             for _, row in data.iterrows():
+                 example = row.to_dict()
+                 ex_id = example.pop("id")
+                 yield (ex_id, example)