leondz committed
Commit b0f1abd
1 Parent(s): 9802036

add reader, dataset, metadata, documentation

Files changed (4)
  1. README.md +191 -0
  2. dataset_infos.json +1 -0
  3. full_albanian_dataset.csv +0 -0
  4. shaj.py +127 -0
README.md ADDED
@@ -0,0 +1,191 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - found
+ languages:
+ - sq-AL
+ licenses:
+ - cc-by-4.0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - text-classification
+ task_ids:
+ - hate-speech-detection
+ paperswithcode_id:
+ pretty_name: SHAJ
+ extra_gated_prompt: "Warning: this repository contains harmful content (abusive language, hate speech)."
+ ---
+
+ # Dataset Card for "shaj"
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:**
+ - **Repository:** [https://figshare.com/articles/dataset/SHAJ_Albanian_hate_speech_abusive_language/19333298/1](https://figshare.com/articles/dataset/SHAJ_Albanian_hate_speech_abusive_language/19333298/1)
+ - **Paper:** [https://arxiv.org/abs/2107.13592](https://arxiv.org/abs/2107.13592)
+ - **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
+ - **Size of downloaded dataset files:** 769.21 KiB
+ - **Size of the generated dataset:** 1.06 MiB
+ - **Total amount of disk used:** 1.85 MiB
+
+ ### Dataset Summary
+
+ This is an abusive/offensive language detection dataset for Albanian. The data is formatted
+ following the OffensEval convention, with three tasks:
+
+ * Subtask A: Offensive (OFF) or not (NOT)
+ * Subtask B: Untargeted (UNT) or targeted insult (TIN)
+ * Subtask C: Type of target: individual (IND), group (GRP), or other (OTH)
+
+ * The subtask A field should always be filled.
+ * The subtask B field should only be filled if there's "offensive" (OFF) in A.
+ * The subtask C field should only be filled if there's "targeted" (TIN) in B.
+
+ The dataset name is a backronym, also standing for "Spoken Hate in the Albanian Jargon"
+
+ See the paper [https://arxiv.org/abs/2107.13592](https://arxiv.org/abs/2107.13592) for full details.
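+
+ The snippet below is a minimal loading sketch, assuming the `datasets` library is installed and that this repository resolves through `load_dataset` (the `"shaj"` identifier is illustrative; substitute the actual Hub path of this repository):
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the single training split and inspect the label scheme.
+ ds = load_dataset("shaj", split="train")   # identifier is illustrative
+ print(ds.features["subtask_a"].names)      # ['OFF', 'NOT']
+ print(ds[0])                               # one annotated comment
+ ```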
+
+ ### Supported Tasks and Leaderboards
+
+ * `hate-speech-detection`: abusive/offensive language detection in Albanian, following the OffensEval formulation (subtasks A, B and C above).
+
+ ### Languages
+
+ Albanian (`bcp47:sq-AL`)
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ #### shaj
+
+ - **Size of downloaded dataset files:** 769.21 KiB
+ - **Size of the generated dataset:** 1.06 MiB
+ - **Total amount of disk used:** 1.85 MiB
+
+ An example of 'train' looks as follows.
+
+ ```
+ {
+   'id': '0',
+   'text': 'PLACEHOLDER TEXT',
+   'subtask_a': 1,
+   'subtask_b': 0,
+   'subtask_c': 0
+ }
+ ```
+
+
+ ### Data Fields
+
+ - `id`: a `string` feature.
+ - `text`: a `string` feature, the text of the comment.
+ - `subtask_a`: whether or not the instance is offensive; `0: OFF, 1: NOT`
+ - `subtask_b`: whether an offensive instance is a targeted insult; `0: TIN, 1: UNT, 2: not applicable`
+ - `subtask_c`: what a targeted insult is aimed at; `0: IND, 1: GRP, 2: OTH, 3: not applicable`
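+
+ As a sketch of how these integers map back to label strings (assuming only the `datasets` library; the label order follows `shaj.py`):
+
+ ```python
+ from datasets import ClassLabel
+
+ # ClassLabel features encode labels as integers in the order declared in shaj.py.
+ subtask_a = ClassLabel(names=["OFF", "NOT"])
+ print(subtask_a.int2str(1))      # 'NOT'
+ print(subtask_a.str2int("OFF"))  # 0
+ ```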
+
+
+ ### Data Splits
+
+ | name | train           |
+ |------|----------------:|
+ | shaj | 11874 sentences |
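+
+ Only a training split is published. If a held-out set is needed, it can be carved out locally; a sketch, assuming the `datasets` library and the illustrative `"shaj"` identifier used above:
+
+ ```python
+ from datasets import load_dataset
+
+ # Split the published train set into local train/eval portions.
+ ds = load_dataset("shaj", split="train")   # identifier is illustrative
+ splits = ds.train_test_split(test_size=0.2, seed=42)
+ train_ds, eval_ds = splits["train"], splits["test"]
+ ```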
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ Collecting data to enable offensive speech detection in Albanian.
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ The text is scraped from comments on popular Albanian YouTube and Instagram accounts.
+ An extended discussion is given in Section 3.2 of the paper.
+
+ #### Who are the source language producers?
+
+ Albanian speakers, primarily people commenting on popular Albanian YouTube and Instagram accounts.
+
+ ### Annotations
+
+ #### Annotation process
+
+ The annotation scheme was taken from OffensEval 2019 and applied by two native-speaker authors of the paper, as well as their friends and family.
+
+ #### Who are the annotators?
+
+ Albanian native speakers, male and female, aged 20-60.
+
+ ### Personal and Sensitive Information
+
+ The data was public at the time of collection. No PII removal has been performed.
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ The data contains abusive language and hate speech; exercise care when using or redistributing it.
+
+ ### Discussion of Biases
+
+
+ ### Other Known Limitations
+
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ The dataset is curated by the paper's authors.
+
+ ### Licensing Information
+
+ The authors distribute this data under the Creative Commons Attribution 4.0 license (CC BY 4.0).
+
+ ### Citation Information
+
+ ```
+ @article{nurce2021detecting,
+   title={Detecting Abusive Albanian},
+   author={Nurce, Erida and Keci, Jorgel and Derczynski, Leon},
+   journal={arXiv preprint arXiv:2107.13592},
+   year={2021}
+ }
+ ```
+
+
+ ### Contributions
+
+ Dataset added by its author, [@leondz](https://github.com/leondz).
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"Shaj": {"description": "This is an abusive/offensive language detection dataset for Albanian. The data is formatted\nfollowing the OffensEval convention, with three tasks:\n\n* Subtask A: Offensive (OFF) or not (NOT)\n* Subtask B: Untargeted (UNT) or targeted insult (TIN)\n* Subtask C: Type of target: individual (IND), group (GRP), or other (OTH)\n\n* The subtask A field should always be filled.\n* The subtask B field should only be filled if there's \"offensive\" (OFF) in A.\n* The subtask C field should only be filled if there's \"targeted\" (TIN) in B.\n\nThe dataset name is a backronym, also standing for \"Spoken Hate in the Albanian Jargon\"\n\nSee the paper [https://arxiv.org/abs/2107.13592](https://arxiv.org/abs/2107.13592) for full details.\n", "citation": "@article{nurce2021detecting,\n title={Detecting Abusive Albanian},\n author={Nurce, Erida and Keci, Jorgel and Derczynski, Leon},\n journal={arXiv preprint arXiv:2107.13592},\n year={2021}\n}\n", "homepage": "https://arxiv.org/abs/2107.13592", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}, "subtask_a": {"num_classes": 2, "names": ["OFF", "NOT"], "id": null, "_type": "ClassLabel"}, "subtask_b": {"num_classes": 3, "names": ["", "TIN", "UNT"], "id": null, "_type": "ClassLabel"}, "subtask_c": {"num_classes": 4, "names": ["", "IND", "GRP", "OTH"], "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "shaj", "config_name": "Shaj", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1116165, "num_examples": 11875, "dataset_name": "shaj"}}, "download_checksums": {"full_albanian_dataset.csv": {"num_bytes": 787673, "checksum": "128cd9915b723a8202f94eda129e82c5d75fb9a1c8dbbe48d0092bb633c3bc3c"}}, "download_size": 787673, "post_processing_size": null, "dataset_size": 1116165, "size_in_bytes": 1903838}}
full_albanian_dataset.csv ADDED
The diff for this file is too large to render. See raw diff
shaj.py ADDED
@@ -0,0 +1,127 @@
+ # coding=utf-8
+ # Copyright 2020 HuggingFace Datasets Authors.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # Lint as: python3
+ """SHAJ: An abusive language dataset for Albanian"""
+
+ import csv
+ import os
+
+ import datasets
+
+
+ logger = datasets.logging.get_logger(__name__)
+
+
+ _CITATION = """\
+ @article{nurce2021detecting,
+   title={Detecting Abusive Albanian},
+   author={Nurce, Erida and Keci, Jorgel and Derczynski, Leon},
+   journal={arXiv preprint arXiv:2107.13592},
+   year={2021}
+ }
+ """
+
+ _DESCRIPTION = """\
+ This is an abusive/offensive language detection dataset for Albanian. The data is formatted
+ following the OffensEval convention, with three tasks:
+
+ * Subtask A: Offensive (OFF) or not (NOT)
+ * Subtask B: Untargeted (UNT) or targeted insult (TIN)
+ * Subtask C: Type of target: individual (IND), group (GRP), or other (OTH)
+
+ * The subtask A field should always be filled.
+ * The subtask B field should only be filled if there's "offensive" (OFF) in A.
+ * The subtask C field should only be filled if there's "targeted" (TIN) in B.
+
+ The dataset name is a backronym, also standing for "Spoken Hate in the Albanian Jargon"
+
+ See the paper [https://arxiv.org/abs/2107.13592](https://arxiv.org/abs/2107.13592) for full details.
+ """
+
+ _URL = "full_albanian_dataset.csv"
+
+
+ class ShajConfig(datasets.BuilderConfig):
+     """BuilderConfig for Shaj"""
+
+     def __init__(self, **kwargs):
+         """BuilderConfig Shaj.
+
+         Args:
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super(ShajConfig, self).__init__(**kwargs)
+
+
+ class Shaj(datasets.GeneratorBasedBuilder):
+     """Shaj dataset."""
+
+     BUILDER_CONFIGS = [
+         ShajConfig(name="Shaj", version=datasets.Version("1.0.0"), description="Abusive language dataset in Albanian"),
+     ]
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "id": datasets.Value("string"),
+                     "text": datasets.Value("string"),
+                     "subtask_a": datasets.features.ClassLabel(
+                         names=[
+                             "OFF",
+                             "NOT",
+                         ]
+                     ),
+                     "subtask_b": datasets.features.ClassLabel(
+                         names=[
+                             "TIN",
+                             "UNT",
+                             "",
+                         ]
+                     ),
+                     "subtask_c": datasets.features.ClassLabel(
+                         names=[
+                             "IND",
+                             "GRP",
+                             "OTH",
+                             "",
+                         ]
+                     ),
+                 }
+             ),
+             supervised_keys=None,
+             homepage="https://arxiv.org/abs/2107.13592",
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         downloaded_file = dl_manager.download_and_extract(_URL)
+
+         return [
+             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_file}),
+         ]
+
+     def _generate_examples(self, filepath):
+         logger.info("⏳ Generating examples from = %s", filepath)
+         with open(filepath, encoding="utf-8") as f:
+             # The CSV is semicolon-delimited and has no id column;
+             # columns are text, subtask_a, subtask_b, subtask_c.
+             shaj_reader = csv.DictReader(f, fieldnames=("text", "subtask_a", "subtask_b", "subtask_c"), delimiter=";", quotechar='"')
+             guid = 0
+             for instance in shaj_reader:
+                 # Assign a sequential id to each row.
+                 instance["id"] = str(guid)
+                 yield guid, instance
+                 guid += 1
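+
+
+ # Usage sketch (illustrative, not executed on import): with
+ # full_albanian_dataset.csv sitting next to this script, the dataset can be
+ # loaded from a local checkout of this repository, e.g.:
+ #
+ #     from datasets import load_dataset
+ #     ds = load_dataset("./shaj.py", name="Shaj", split="train")
+ #
+ # The local path and config name above are assumptions about such a checkout,
+ # not part of the published loader.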