Bhavish Pahwa committed
Commit b19f21f (1 Parent(s): 0188b82)

Adding Roman Urdu Hate Speech dataset (#3972)


* Adding Roman Urdu Hate Speech dataset

* Update Readme

* Update data structure sections in README

* Update Additional Information Section

* Update Contributions Section with some contents

* Remove typos in README

* Update Dataset Script

* Update Dummy_Data

* Apply suggestions from code review

Co-authored-by: Bhavish Pahwa <bhavishpahwa@Bhavishs-MacBook-Air.local>
Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>

Commit from https://github.com/huggingface/datasets/commit/5d0ca0f8ac43a3f9d5847ea05a6a5add588c6fc8

README.md ADDED
@@ -0,0 +1,183 @@
---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
languages:
- ur
licenses:
- mit
multilinguality:
- monolingual
pretty_name: roman_urdu_hate_speech
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
- text-classification-other-binary classification
---

# Dataset Card for roman_urdu_hate_speech

## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)

## Dataset Description

- **Homepage:** [roman_urdu_hate_speech homepage](https://aclanthology.org/2020.emnlp-main.197/)
- **Repository:** [roman_urdu_hate_speech repository](https://github.com/haroonshakeel/roman_urdu_hate_speech)
- **Paper:** [Hate-Speech and Offensive Language Detection in Roman Urdu](https://aclanthology.org/2020.emnlp-main.197.pdf)
- **Leaderboard:** [N/A]
- **Point of Contact:** [M. Haroon Shakeel](mailto:m.shakeel@lums.edu.pk)

### Dataset Summary

The Roman Urdu Hate-Speech and Offensive Language Detection (RUHSOLD) dataset is a Roman Urdu dataset of tweets annotated by experts in the relevant language. The authors develop a gold standard for two sub-tasks. The first sub-task is based on binary labels of Hate-Offensive content and Normal content (i.e., inoffensive language); these labels are self-explanatory, and the authors refer to this sub-task as coarse-grained classification. The second sub-task defines Hate-Offensive content with four labels at a granular level. These labels are the most relevant for the demographic of users who converse in Roman Urdu and are defined in the related literature; the authors refer to this sub-task as fine-grained classification. The objective behind creating two gold standards is to enable researchers to evaluate hate speech detection approaches in both easier (coarse-grained) and more challenging (fine-grained) scenarios.

### Supported Tasks and Leaderboards

- 'multi-class-classification', 'text-classification-other-binary classification': The dataset can be used both for multi-class classification and for binary classification, since it provides fine-grained as well as coarse-grained labels; a minimal loading sketch follows below.

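A minimal sketch of loading the two configurations with the `datasets` library (the config names `Coarse_Grained` and `Fine_Grained` come from the dataset script below):

```python
from datasets import load_dataset

# Binary sub-task: Abusive/Offensive vs. Normal
coarse = load_dataset("roman_urdu_hate_speech", "Coarse_Grained")

# Five-class sub-task: adds Religious Hate, Sexism, Profane/Untargeted
fine = load_dataset("roman_urdu_hate_speech", "Fine_Grained")

print(coarse["train"][0])  # {'tweet': '...', 'label': ...}
```
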
### Languages

The text of this dataset is in Roman Urdu, i.e., Urdu written in the Latin script. The associated BCP-47 code is 'ur'.

## Dataset Structure

### Data Instances

The dataset is divided into two segments: Coarse Grained examples and Fine Grained examples. The difference is that in the coarse-grained segment each tweet is labelled as either abusive/offensive or normal, whereas in the fine-grained segment a tweet is associated with one of several classes of hate.

For the Coarse Grained segment of the dataset, the label mapping is:

Task 1: Coarse-grained Classification Labels
- 0: Abusive/Offensive
- 1: Normal

For the Fine Grained segment of the dataset, the label mapping is:

Task 2: Fine-grained Classification Labels
- 0: Abusive/Offensive
- 1: Normal
- 2: Religious Hate
- 3: Sexism
- 4: Profane/Untargeted

An example from Roman Urdu Hate Speech looks as follows:
```
{
  'tweet': 'there are some yahodi daboo like imran chore zakat khore',
  'label': 0
}
```

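Because `label` is a `ClassLabel` feature, the integer ids can be mapped back to class names; a small sketch, assuming the Fine_Grained configuration:

```python
from datasets import load_dataset

ds = load_dataset("roman_urdu_hate_speech", "Fine_Grained", split="train")

# ClassLabel features provide int2str/str2int for id <-> name mapping
label_feature = ds.features["label"]
print(label_feature.int2str(2))         # Religious Hate
print(label_feature.str2int("Sexism"))  # 3
```
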
### Data Fields

- tweet: a string containing the text of the tweet. The tweets were obtained by randomly sampling 10,000 tweets from a base of 50,000 and annotating them for the dataset.

- label: the class label, assigned manually by three independent annotators; all conflicts arising during the annotation process were resolved by a majority vote among the three annotators.

### Data Splits

The data of each segment, Coarse Grained and Fine Grained, is further split into training, validation, and test sets. The data is split with a 70/20/10 ratio, using stratification based on the fine-grained labels.

Stratified sampling is necessary to preserve the same label ratio across all splits.

The final split sizes are as follows:

| Train | Valid | Test |
|-------|-------|------|
| 7209  | 2003  | 801  |

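The splits ship with the dataset, so there is no need to re-split; purely as an illustration, a 70/20/10 stratified split like the one described above could be sketched with scikit-learn's `train_test_split` (an assumption, not the authors' tooling):

```python
from sklearn.model_selection import train_test_split

def stratified_70_20_10(tweets, labels, seed=42):
    # Carve off 70% for training, stratifying on the labels
    x_train, x_rest, y_train, y_rest = train_test_split(
        tweets, labels, train_size=0.7, stratify=labels, random_state=seed
    )
    # Split the remaining 30% into validation (20%) and test (10%),
    # i.e., a 2:1 split of the remainder, again stratified
    x_valid, x_test, y_valid, y_test = train_test_split(
        x_rest, y_rest, train_size=2 / 3, stratify=y_rest, random_state=seed
    )
    return (x_train, y_train), (x_valid, y_valid), (x_test, y_test)
```
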
## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

The dataset was created by Hammad Rizwan, Muhammad Haroon Shakeel, and Asim Karim during work done at the Department of Computer Science, Lahore University of Management Sciences (LUMS), Lahore, Pakistan.

### Licensing Information

The licensing status of the dataset hinges on the legal status of the [Roman Urdu Hate Speech Dataset Repository](https://github.com/haroonshakeel/roman_urdu_hate_speech), which is under the MIT License.

### Citation Information

```bibtex
@inproceedings{rizwan2020hate,
  title={Hate-speech and offensive language detection in roman Urdu},
  author={Rizwan, Hammad and Shakeel, Muhammad Haroon and Karim, Asim},
  booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
  pages={2512--2522},
  year={2020}
}
```

### Contributions

Thanks to [@bp-high](https://github.com/bp-high) for adding this dataset.
dataset_infos.json ADDED
@@ -0,0 +1 @@
{"Coarse_Grained": {"description": " The Roman Urdu Hate-Speech and Offensive Language Detection (RUHSOLD) dataset is a Roman Urdu dataset of tweets annotated by experts in the relevant language. The authors develop the gold-standard for two sub-tasks. First sub-task is based on binary labels of Hate-Offensive content and Normal content (i.e., inoffensive language). These labels are self-explanatory. The authors refer to this sub-task as coarse-grained classification. Second sub-task defines Hate-Offensive content with four labels at a granular level. These labels are the most relevant for the demographic of users who converse in RU and are defined in related literature. The authors refer to this sub-task as fine-grained classification. The objective behind creating two gold-standards is to enable the researchers to evaluate the hate speech detection approaches on both easier (coarse-grained) and challenging (fine-grained) scenarios. ", "citation": "@inproceedings{rizwan2020hate,\n title={Hate-speech and offensive language detection in roman Urdu},\n author={Rizwan, Hammad and Shakeel, Muhammad Haroon and Karim, Asim},\n booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)},\n pages={2512--2522},\n year={2020}\n}\n", "homepage": "https://github.com/haroonshakeel/roman_urdu_hate_speech", "license": "MIT License", "features": {"tweet": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 2, "names": ["Abusive/Offensive", "Normal"], "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "task_templates": [{"task": "text-classification", "text_column": "tweet", "label_column": "label"}], "builder_name": "roman_urdu_hate_speech", "config_name": "Coarse_Grained", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 725719, "num_examples": 7208, "dataset_name": "roman_urdu_hate_speech"}, "test": {"name": "test", "num_bytes": 218087, "num_examples": 2002, "dataset_name": "roman_urdu_hate_speech"}, "validation": {"name": "validation", "num_bytes": 79759, "num_examples": 800, "dataset_name": "roman_urdu_hate_speech"}}, "download_checksums": {"https://raw.githubusercontent.com/haroonshakeel/roman_urdu_hate_speech/main/task_1_train.tsv": {"num_bytes": 668097, "checksum": "6236116609a80aaf6b9c7fab8f8d236b148d4638c6255a178c0d79d7766aa3b4"}, "https://raw.githubusercontent.com/haroonshakeel/roman_urdu_hate_speech/main/task_1_validation.tsv": {"num_bytes": 73747, "checksum": "eff8a097b0d8974bec2158b8e0512b43537cbf796c828ca64fd3841fc8dee0cb"}, "https://raw.githubusercontent.com/haroonshakeel/roman_urdu_hate_speech/main/task_1_test.tsv": {"num_bytes": 186093, "checksum": "c08a90dd63e35a0eb3737c90f7bc09917b2832e56ffab8b37fff89499a419fe2"}}, "download_size": 927937, "post_processing_size": null, "dataset_size": 1023565, "size_in_bytes": 1951502}, "Fine_Grained": {"description": " The Roman Urdu Hate-Speech and Offensive Language Detection (RUHSOLD) dataset is a Roman Urdu dataset of tweets annotated by experts in the relevant language. The authors develop the gold-standard for two sub-tasks. First sub-task is based on binary labels of Hate-Offensive content and Normal content (i.e., inoffensive language). These labels are self-explanatory. The authors refer to this sub-task as coarse-grained classification. Second sub-task defines Hate-Offensive content with four labels at a granular level. These labels are the most relevant for the demographic of users who converse in RU and are defined in related literature. The authors refer to this sub-task as fine-grained classification. The objective behind creating two gold-standards is to enable the researchers to evaluate the hate speech detection approaches on both easier (coarse-grained) and challenging (fine-grained) scenarios. ", "citation": "@inproceedings{rizwan2020hate,\n title={Hate-speech and offensive language detection in roman Urdu},\n author={Rizwan, Hammad and Shakeel, Muhammad Haroon and Karim, Asim},\n booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)},\n pages={2512--2522},\n year={2020}\n}\n", "homepage": "https://github.com/haroonshakeel/roman_urdu_hate_speech", "license": "MIT License", "features": {"tweet": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 5, "names": ["Abusive/Offensive", "Normal", "Religious Hate", "Sexism", "Profane/Untargeted"], "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "task_templates": [{"task": "text-classification", "text_column": "tweet", "label_column": "label"}], "builder_name": "roman_urdu_hate_speech", "config_name": "Fine_Grained", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 723670, "num_examples": 7208, "dataset_name": "roman_urdu_hate_speech"}, "test": {"name": "test", "num_bytes": 219359, "num_examples": 2002, "dataset_name": "roman_urdu_hate_speech"}, "validation": {"name": "validation", "num_bytes": 723670, "num_examples": 7208, "dataset_name": "roman_urdu_hate_speech"}}, "download_checksums": {"https://raw.githubusercontent.com/haroonshakeel/roman_urdu_hate_speech/main/task_2_train.tsv": {"num_bytes": 666024, "checksum": "936bbb67990f6e19e136ecde7f313b3acf266ce50824deebb06a6513dc9341be"}, "https://raw.githubusercontent.com/haroonshakeel/roman_urdu_hate_speech/main/task_2_validation.tsv": {"num_bytes": 666024, "checksum": "936bbb67990f6e19e136ecde7f313b3acf266ce50824deebb06a6513dc9341be"}, "https://raw.githubusercontent.com/haroonshakeel/roman_urdu_hate_speech/main/task_2_test.tsv": {"num_bytes": 187375, "checksum": "09e90a3a59dfaef64a4a4debd105254ecd1749312a1a6b275d7377c73ea5b8ca"}}, "download_size": 1519423, "post_processing_size": null, "dataset_size": 1666699, "size_in_bytes": 3186122}}
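The `dataset_infos.json` above records per-config features, split sizes, and download checksums; a small sketch of inspecting it locally (assuming the file has been saved next to the script):

```python
import json

with open("dataset_infos.json", encoding="utf-8") as f:
    infos = json.load(f)

# Print the number of examples per split for each configuration
for config_name, info in infos.items():
    sizes = {name: split["num_examples"] for name, split in info["splits"].items()}
    print(config_name, sizes)
```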
dummy/Coarse_Grained/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aa2d1be5d3d7e2ae4fc0ab38039d775bfb74b4fb8899e35a76b2b618d33d1ab5
size 1407
dummy/Fine_Grained/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:919ec31623dcaf46bc5546b9606577eed9a6073172127ed738bf7ae429ecbe69
size 1371
roman_urdu_hate_speech.py ADDED
@@ -0,0 +1,210 @@
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""roman_urdu_hate_speech dataset"""


import csv

import datasets
from datasets.tasks import TextClassification

# Citation for the RUHSOLD paper (EMNLP 2020)
_CITATION = """\
@inproceedings{rizwan2020hate,
    title={Hate-speech and offensive language detection in roman Urdu},
    author={Rizwan, Hammad and Shakeel, Muhammad Haroon and Karim, Asim},
    booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
    pages={2512--2522},
    year={2020}
}
"""

# Description taken from the dataset repository and paper
_DESCRIPTION = """\
The Roman Urdu Hate-Speech and Offensive Language Detection (RUHSOLD) dataset is a \
Roman Urdu dataset of tweets annotated by experts in the relevant language. \
The authors develop the gold-standard for two sub-tasks. \
First sub-task is based on binary labels of Hate-Offensive content and Normal content (i.e., inoffensive language). \
These labels are self-explanatory. \
The authors refer to this sub-task as coarse-grained classification. \
Second sub-task defines Hate-Offensive content with \
four labels at a granular level. \
These labels are the most relevant for the demographic of users who converse in RU and \
are defined in related literature. The authors refer to this sub-task as fine-grained classification. \
The objective behind creating two gold-standards is to enable the researchers to evaluate the hate speech detection \
approaches on both easier (coarse-grained) and challenging (fine-grained) scenarios. \
"""

_HOMEPAGE = "https://github.com/haroonshakeel/roman_urdu_hate_speech"

_LICENSE = "MIT License"

_DOWNLOAD_URL = "https://raw.githubusercontent.com/haroonshakeel/roman_urdu_hate_speech/main/"

# The Hugging Face Datasets library doesn't host the dataset; it only points to the original files.
# task_1_* files carry the coarse-grained labels, task_2_* the fine-grained ones.
_URLS = {
    "Coarse_Grained_train": _DOWNLOAD_URL + "task_1_train.tsv",
    "Coarse_Grained_validation": _DOWNLOAD_URL + "task_1_validation.tsv",
    "Coarse_Grained_test": _DOWNLOAD_URL + "task_1_test.tsv",
    "Fine_Grained_train": _DOWNLOAD_URL + "task_2_train.tsv",
    "Fine_Grained_validation": _DOWNLOAD_URL + "task_2_validation.tsv",
    "Fine_Grained_test": _DOWNLOAD_URL + "task_2_test.tsv",
}


class RomanUrduHateSpeechConfig(datasets.BuilderConfig):
    """BuilderConfig for RomanUrduHateSpeech."""

    def __init__(self, **kwargs):
        """BuilderConfig for RomanUrduHateSpeech.

        Args:
            **kwargs: keyword arguments forwarded to super.
        """
        super(RomanUrduHateSpeechConfig, self).__init__(**kwargs)


class RomanUrduHateSpeech(datasets.GeneratorBasedBuilder):
    """Roman Urdu Hate Speech dataset"""

    VERSION = datasets.Version("1.1.0")

    # The dataset ships two configurations, one per sub-task. Either can be
    # loaded by name:
    #   data = datasets.load_dataset('roman_urdu_hate_speech', 'Coarse_Grained')
    #   data = datasets.load_dataset('roman_urdu_hate_speech', 'Fine_Grained')
    BUILDER_CONFIGS = [
        RomanUrduHateSpeechConfig(
            name="Coarse_Grained",
            version=VERSION,
            description="This part of the dataset covers the Coarse Grained (binary) labels",
        ),
        RomanUrduHateSpeechConfig(
            name="Fine_Grained",
            version=VERSION,
            description="This part of the dataset covers the Fine Grained (five-class) labels",
        ),
    ]

    # It's not mandatory to have a default configuration; Coarse_Grained is the simpler sub-task.
    DEFAULT_CONFIG_NAME = "Coarse_Grained"

    def _info(self):
        # The two configurations share the `tweet` text column but differ in
        # their label set, so the features are defined per configuration.
        if self.config.name == "Coarse_Grained":
            features = datasets.Features(
                {
                    "tweet": datasets.Value("string"),
                    "label": datasets.features.ClassLabel(names=["Abusive/Offensive", "Normal"]),
                }
            )
        else:  # Fine_Grained
            features = datasets.Features(
                {
                    "tweet": datasets.Value("string"),
                    "label": datasets.features.ClassLabel(
                        names=["Abusive/Offensive", "Normal", "Religious Hate", "Sexism", "Profane/Untargeted"]
                    ),
                }
            )
        return datasets.DatasetInfo(
            # This is the description that will appear on the datasets page.
            description=_DESCRIPTION,
            # Features are defined above because they differ between the two configurations
            features=features,
            homepage=_HOMEPAGE,
            license=_LICENSE,
            citation=_CITATION,
            task_templates=[TextClassification(text_column="tweet", label_column="label")],
        )

    def _split_generators(self, dl_manager):
        # Each configuration has its own train/validation/test TSV file.
        urls_train = _URLS[self.config.name + "_train"]
        urls_validate = _URLS[self.config.name + "_validation"]
        urls_test = _URLS[self.config.name + "_test"]

        data_dir_train = dl_manager.download_and_extract(urls_train)
        data_dir_validate = dl_manager.download_and_extract(urls_validate)
        data_dir_test = dl_manager.download_and_extract(urls_test)

        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                # These kwargs will be passed to _generate_examples
                gen_kwargs={
                    "filepath": data_dir_train,
                    "split": "train",
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                gen_kwargs={
                    "filepath": data_dir_test,
                    "split": "test",
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION,
                gen_kwargs={
                    "filepath": data_dir_validate,
                    "split": "dev",
                },
            ),
        ]

    # method parameters are unpacked from `gen_kwargs` as given in `_split_generators`
    def _generate_examples(self, filepath, split):
        # The `key` is for legacy reasons (tfds) and is not important in itself, but must be unique for each example.
        with open(filepath, encoding="utf-8") as tsv_file:
            tsv_reader = csv.reader(tsv_file, quotechar="|", delimiter="\t", quoting=csv.QUOTE_ALL)
            for key, row in enumerate(tsv_reader):
                if key == 0:
                    # Skip the header row
                    continue
                # Both configurations use the same two-column layout, so a single
                # code path handles them; only the ClassLabel names differ.
                tweet, label = row
                yield key, {
                    "tweet": tweet,
                    # Labels are not yielded for the test split
                    "label": None if split == "test" else int(label),
                }
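
For local testing, the builder above can also be loaded directly from the script file rather than from the Hub; a minimal sketch (the relative path is an assumption):

```python
from datasets import load_dataset

# Load via the local script; the path "./roman_urdu_hate_speech.py" is hypothetical
ds = load_dataset("./roman_urdu_hate_speech.py", "Fine_Grained")

print(ds)  # DatasetDict with train, test, and validation splits
print(ds["validation"].features["label"].names)
```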