Commit ae7fcdb (0 parents), committed by HF staff (system)

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

Files changed (5):
  1. .gitattributes +27 -0
  2. README.md +170 -0
  3. dataset_infos.json +1 -0
  4. dummy/1.1.0/dummy_data.zip +3 -0
  5. kor_hate.py +98 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
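These `.gitattributes` patterns route matching files through Git LFS. As a rough illustration, the glob matching can be approximated with Python's stdlib `fnmatch` (the helper name and the pattern subset are ours, not part of the repo; Git's matching differs for slash-containing patterns like `saved_model/**/*`):

```python
from fnmatch import fnmatch

# Subset of the LFS patterns declared above (glob syntax)
LFS_PATTERNS = ["*.7z", "*.arrow", "*.bin", "*.zip", "*.parquet", "*tfevents*"]

def tracked_by_lfs(filename: str) -> bool:
    """Return True if the filename matches any declared LFS pattern."""
    return any(fnmatch(filename, pat) for pat in LFS_PATTERNS)

print(tracked_by_lfs("dummy_data.zip"))  # matches *.zip -> True
print(tracked_by_lfs("kor_hate.py"))     # no pattern matches -> False
```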
README.md ADDED
@@ -0,0 +1,170 @@
+ ---
+ annotations_creators:
+ - crowdsourced
+ - expert-generated
+ language_creators:
+ - found
+ languages:
+ - ko
+ licenses:
+ - cc-by-sa-4-0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 1K<n<10K
+ source_datasets:
+ - original
+ task_categories:
+ - text-classification
+ task_ids:
+ - multi-label-classification
+ ---
+
+ # Dataset Card for Korean HateSpeech Dataset
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+ - [Dataset Summary](#dataset-summary)
+ - [Supported Tasks](#supported-tasks-and-leaderboards)
+ - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+ - [Data Instances](#data-instances)
+ - [Data Fields](#data-fields)
+ - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+ - [Curation Rationale](#curation-rationale)
+ - [Source Data](#source-data)
+ - [Annotations](#annotations)
+ - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+ - [Social Impact of Dataset](#social-impact-of-dataset)
+ - [Discussion of Biases](#discussion-of-biases)
+ - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+ - [Dataset Curators](#dataset-curators)
+ - [Licensing Information](#licensing-information)
+ - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** [Korean HateSpeech Dataset](https://github.com/kocohub/korean-hate-speech)
+ - **Repository:** [Korean HateSpeech Dataset](https://github.com/kocohub/korean-hate-speech)
+ - **Paper:** [BEEP! Korean Corpus of Online News Comments for Toxic Speech Detection](https://arxiv.org/abs/2005.12503)
+ - **Point of Contact:** [Steven Liu](mailto:stevhliu@gmail.com)
+
+ ### Dataset Summary
+
+ The Korean HateSpeech Dataset is a dataset of 8367 human-labeled entertainment news comments from a popular Korean news aggregation platform. Each comment is annotated for social bias (labels: `gender`, `others`, `none`), hate speech (labels: `hate`, `offensive`, `none`), and gender bias (labels: `True`, `False`). The dataset was created to support the identification of toxic comments on online platforms where users can remain anonymous.
+
+ ### Supported Tasks and Leaderboards
+
+ * `multi-label-classification`: The dataset can be used to train a model for hate speech detection. A BERT model can be presented with a Korean entertainment news comment and asked to label whether it contains social bias, gender bias, and hate speech. Users can participate in a Kaggle leaderboard [here](https://www.kaggle.com/c/korean-hate-speech-detection/overview).
+
+ ### Languages
+
+ The text in the dataset is in Korean, and the associated BCP-47 code is `ko-KR`.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ An example data instance contains a `comments` field containing the text of the news comment, along with labels for each of the following fields: `contain_gender_bias`, `bias`, and `hate`.
+
+ ```python
+ {'comments': '설마 ㅈ 현정 작가 아니지??',
+  'contain_gender_bias': 'True',
+  'bias': 'gender',
+  'hate': 'hate'}
+ ```
+
+ ### Data Fields
+
+ * `comments`: text of the Korean news comment
+ * `contain_gender_bias`: a binary `True`/`False` label for the presence of gender bias
+ * `bias`: the type of social bias, which can be:
+   * `gender`: if the text includes bias about gender roles, sexual orientation, sexual identity, or any thoughts on gender-related acts
+   * `others`: other kinds of social bias that are not gender-related, including race, background, nationality, ethnic group, political stance, skin color, religion, disability, age, appearance, wealth, occupation, or the absence of military service experience
+   * `none`: a comment that contains no social bias
+ * `hate`: how aggressive the comment is, which can be:
+   * `hate`: if the text expresses an aggressive stance towards individuals or groups with certain characteristics (gender role, sexual orientation, sexual identity, any thoughts on gender-related acts, race, background, nationality, ethnic group, political stance, skin color, religion, disability, age, appearance, wealth, occupation, the absence of military service experience, etc.)
+   * `offensive`: if the text contains rude or aggressive content, emits sarcasm through rhetorical questions or irony, includes an unethical expression, or conveys unidentified rumors
+   * `none`: a comment that contains no hate speech
+
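In the loading script below, each of these label fields is a `ClassLabel`, which stores the string labels as integer ids in declaration order. A minimal sketch of that mapping (the name lists match the script and `dataset_infos.json`; the helper function is ours for illustration):

```python
# Label name lists in the order declared by the dataset's ClassLabel features
CLASS_NAMES = {
    "contain_gender_bias": ["False", "True"],
    "bias": ["none", "gender", "others"],
    "hate": ["hate", "offensive", "none"],
}

def str2int(field: str, label: str) -> int:
    """Map a string label to its integer id (its index in the declared names)."""
    return CLASS_NAMES[field].index(label)

print(str2int("bias", "gender"))  # -> 1
print(str2int("hate", "none"))    # -> 2
```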
+ ### Data Splits
+
+ The data is split into a training and development (test) set. It contains 8367 annotated comments, split into 7896 comments in the training set and 471 comments in the test set.
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ The dataset was created to provide the first human-labeled Korean corpus for toxic speech detection, drawn from a Korean online entertainment news aggregator. Recently, two young Korean celebrities suffered through a series of tragic incidents that led two major Korean web portals to close the comment sections on their platforms. However, this only serves as a temporary solution, and the fundamental issue has not been solved. This dataset aims to improve Korean hate speech detection.
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ A total of 10.4 million comments were collected from an online Korean entertainment news aggregator between Jan. 1, 2018 and Feb. 29, 2020. 1,580 articles were drawn using stratified sampling, and for each article the top 20 comments, ranked by their Wilson score on the downvote, were extracted. Duplicate comments, single-token comments, and comments with more than 100 characters (because they could convey multiple opinions) were removed. From these, 10K comments were randomly chosen for annotation.
+
+ #### Who are the source language producers?
+
+ The language producers are users of the Korean online news platform between 2018 and 2020.
+
+ ### Annotations
+
+ #### Annotation process
+
+ Each comment was assigned to three random annotators so that a majority decision could be obtained. For more ambiguous comments, annotators were allowed to skip. See Appendix A in the [paper](https://arxiv.org/pdf/2005.12503.pdf) for more detailed guidelines.
+
+ #### Who are the annotators?
+
+ Annotation was performed by 32 annotators: 29 from the crowdsourcing platform DeepNatural AI and three NLP researchers.
+
+ ### Personal and Sensitive Information
+
+ [N/A]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ The purpose of this dataset is to tackle the social issue of users posting toxic comments on online platforms, and to improve the detection of such comments.
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ This dataset was curated by Jihyung Moon, Won Ik Cho and Junbum Lee.
+
+ ### Licensing Information
+
+ [N/A]
+
+ ### Citation Information
+
+ ```
+ @inproceedings{moon-etal-2020-beep,
+     title = "{BEEP}! {K}orean Corpus of Online News Comments for Toxic Speech Detection",
+     author = "Moon, Jihyung and
+       Cho, Won Ik and
+       Lee, Junbum",
+     booktitle = "Proceedings of the Eighth International Workshop on Natural Language Processing for Social Media",
+     month = jul,
+     year = "2020",
+     address = "Online",
+     publisher = "Association for Computational Linguistics",
+     url = "https://www.aclweb.org/anthology/2020.socialnlp-1.4",
+     pages = "25--31",
+     abstract = "Toxic comments in online platforms are an unavoidable social issue under the cloak of anonymity. Hate speech detection has been actively done for languages such as English, German, or Italian, where manually labeled corpus has been released. In this work, we first present 9.4K manually labeled entertainment news comments for identifying Korean toxic speech, collected from a widely used online news platform in Korea. The comments are annotated regarding social bias and hate speech since both aspects are correlated. The inter-annotator agreement Krippendorff{'}s alpha score is 0.492 and 0.496, respectively. We provide benchmarks using CharCNN, BiLSTM, and BERT, where BERT achieves the highest score on all tasks. The models generally display better performance on bias identification, since the hate speech detection is a more subjective issue. Additionally, when BERT is trained with bias label for hate speech detection, the prediction score increases, implying that bias and hate are intertwined. We make our dataset publicly available and open competitions with the corpus and benchmarks.",
+ }
+ ```
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"default": {"description": "Human-annotated Korean corpus collected from a popular domestic entertainment news aggregation platform\nfor toxic speech detection. Comments are annotated for gender bias, social bias and hate speech. \n", "citation": "@inproceedings{moon-etal-2020-beep,\n    title = \"{BEEP}! {K}orean Corpus of Online News Comments for Toxic Speech Detection\",\n    author = \"Moon, Jihyung  and\n      Cho, Won Ik  and\n      Lee, Junbum\",\n    booktitle = \"Proceedings of the Eighth International Workshop on Natural Language Processing for Social Media\",\n    month = jul,\n    year = \"2020\",\n    address = \"Online\",\n    publisher = \"Association for Computational Linguistics\",\n    url = \"https://www.aclweb.org/anthology/2020.socialnlp-1.4\",\n    pages = \"25--31\",\n    abstract = \"Toxic comments in online platforms are an unavoidable social issue under the cloak of anonymity. Hate speech detection has been actively done for languages such as English, German, or Italian, where manually labeled corpus has been released. In this work, we first present 9.4K manually labeled entertainment news comments for identifying Korean toxic speech, collected from a widely used online news platform in Korea. The comments are annotated regarding social bias and hate speech since both aspects are correlated. The inter-annotator agreement Krippendorff{'}s alpha score is 0.492 and 0.496, respectively. We provide benchmarks using CharCNN, BiLSTM, and BERT, where BERT achieves the highest score on all tasks. The models generally display better performance on bias identification, since the hate speech detection is a more subjective issue. Additionally, when BERT is trained with bias label for hate speech detection, the prediction score increases, implying that bias and hate are intertwined. We make our dataset publicly available and open competitions with the corpus and benchmarks.\",\n}\n", "homepage": "https://github.com/kocohub/korean-hate-speech", "license": "Creative Commons", "features": {"comments": {"dtype": "string", "id": null, "_type": "Value"}, "contain_gender_bias": {"num_classes": 2, "names": ["False", "True"], "names_file": null, "id": null, "_type": "ClassLabel"}, "bias": {"num_classes": 3, "names": ["none", "gender", "others"], "names_file": null, "id": null, "_type": "ClassLabel"}, "hate": {"num_classes": 3, "names": ["hate", "offensive", "none"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "builder_name": "kor_hate", "config_name": "default", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 983608, "num_examples": 7896, "dataset_name": "kor_hate"}, "test": {"name": "test", "num_bytes": 58913, "num_examples": 471, "dataset_name": "kor_hate"}}, "download_checksums": {"https://raw.githubusercontent.com/kocohub/korean-hate-speech/master/labeled/train.tsv": {"num_bytes": 913546, "checksum": "ebebacdcd023af2c4acc8c0a37695fb6433ac04fc009feff8f222724e303a5a9"}, "https://raw.githubusercontent.com/kocohub/korean-hate-speech/master/labeled/dev.tsv": {"num_bytes": 54903, "checksum": "232b615d6e359a9d31dfb8370f32e1733dc5bb3f9c5430d34d7fcc7ba4b7e8ef"}}, "download_size": 968449, "post_processing_size": null, "dataset_size": 1042521, "size_in_bytes": 2010970}}
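The split metadata in this file is internally consistent: the per-split byte sizes and example counts should sum to the reported totals. A quick stdlib check over a condensed copy of those numbers (the condensed JSON below is our excerpt, not the full file):

```python
import json

# Condensed from the "splits" and "dataset_size" entries of dataset_infos.json
infos = json.loads("""
{"splits": {"train": {"num_bytes": 983608, "num_examples": 7896},
            "test":  {"num_bytes": 58913,  "num_examples": 471}},
 "dataset_size": 1042521}
""")

total_bytes = sum(s["num_bytes"] for s in infos["splits"].values())
total_examples = sum(s["num_examples"] for s in infos["splits"].values())
print(total_examples)                        # 8367 labeled comments overall
print(total_bytes == infos["dataset_size"])  # True: 983608 + 58913 = 1042521
```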
dummy/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e3e2ef0d9aa4a887eef3df87f45b461b60f9fdb0ec3837cdf42936b6f8df5bcc
+ size 1060
kor_hate.py ADDED
@@ -0,0 +1,98 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """Korean HateSpeech Dataset"""
+
+ from __future__ import absolute_import, division, print_function
+
+ import csv
+
+ import datasets
+
+
+ _CITATION = """\
+ @inproceedings{moon-etal-2020-beep,
+     title = "{BEEP}! {K}orean Corpus of Online News Comments for Toxic Speech Detection",
+     author = "Moon, Jihyung and
+       Cho, Won Ik and
+       Lee, Junbum",
+     booktitle = "Proceedings of the Eighth International Workshop on Natural Language Processing for Social Media",
+     month = jul,
+     year = "2020",
+     address = "Online",
+     publisher = "Association for Computational Linguistics",
+     url = "https://www.aclweb.org/anthology/2020.socialnlp-1.4",
+     pages = "25--31",
+     abstract = "Toxic comments in online platforms are an unavoidable social issue under the cloak of anonymity. Hate speech detection has been actively done for languages such as English, German, or Italian, where manually labeled corpus has been released. In this work, we first present 9.4K manually labeled entertainment news comments for identifying Korean toxic speech, collected from a widely used online news platform in Korea. The comments are annotated regarding social bias and hate speech since both aspects are correlated. The inter-annotator agreement Krippendorff{'}s alpha score is 0.492 and 0.496, respectively. We provide benchmarks using CharCNN, BiLSTM, and BERT, where BERT achieves the highest score on all tasks. The models generally display better performance on bias identification, since the hate speech detection is a more subjective issue. Additionally, when BERT is trained with bias label for hate speech detection, the prediction score increases, implying that bias and hate are intertwined. We make our dataset publicly available and open competitions with the corpus and benchmarks.",
+ }
+ """
+
+ _DESCRIPTION = """\
+ Human-annotated Korean corpus collected from a popular domestic entertainment news aggregation platform
+ for toxic speech detection. Comments are annotated for gender bias, social bias and hate speech.
+ """
+
+ _HOMEPAGE = "https://github.com/kocohub/korean-hate-speech"
+
+ _LICENSE = "Creative Commons"
+
+ _TRAIN_DOWNLOAD_URL = "https://raw.githubusercontent.com/kocohub/korean-hate-speech/master/labeled/train.tsv"
+ _TEST_DOWNLOAD_URL = "https://raw.githubusercontent.com/kocohub/korean-hate-speech/master/labeled/dev.tsv"
+
+
+ class KorHate(datasets.GeneratorBasedBuilder):
+     """Korean Corpus of Online News Comments for Toxic Speech Detection"""
+
+     VERSION = datasets.Version("1.1.0")
+
+     def _info(self):
+
+         features = datasets.Features(
+             {
+                 "comments": datasets.Value("string"),
+                 "contain_gender_bias": datasets.features.ClassLabel(names=["False", "True"]),
+                 "bias": datasets.features.ClassLabel(names=["none", "gender", "others"]),
+                 "hate": datasets.features.ClassLabel(names=["hate", "offensive", "none"]),
+             }
+         )
+
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         train_path = dl_manager.download_and_extract(_TRAIN_DOWNLOAD_URL)
+         test_path = dl_manager.download_and_extract(_TEST_DOWNLOAD_URL)
+         return [
+             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": train_path}),
+             datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": test_path}),
+         ]
+
+     def _generate_examples(self, filepath):
+         """Generate Korean HateSpeech examples"""
+
+         with open(filepath, encoding="utf-8") as tsv_file:
+             tsv_reader = csv.DictReader(tsv_file, delimiter="\t", quoting=csv.QUOTE_NONE)
+             for id_, row in enumerate(tsv_reader):
+                 yield id_, {
+                     "comments": row["comments"],
+                     "contain_gender_bias": row["contain_gender_bias"],
+                     "bias": row["bias"],
+                     "hate": row["hate"],
+                 }
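The `_generate_examples` method is plain `csv.DictReader` parsing over a TSV file. The same logic can be exercised standalone on an inline sample (the two sample rows below are made up for illustration; the real files come from the download URLs above):

```python
import csv
import io

# Hypothetical two-row sample in the same TSV layout as labeled/train.tsv
SAMPLE_TSV = (
    "comments\tcontain_gender_bias\tbias\thate\n"
    "first comment\tFalse\tnone\tnone\n"
    "second comment\tTrue\tgender\thate\n"
)

def generate_examples(tsv_file):
    """Yield (id, example) pairs the same way the loading script does."""
    reader = csv.DictReader(tsv_file, delimiter="\t", quoting=csv.QUOTE_NONE)
    for id_, row in enumerate(reader):
        yield id_, {
            "comments": row["comments"],
            "contain_gender_bias": row["contain_gender_bias"],
            "bias": row["bias"],
            "hate": row["hate"],
        }

examples = dict(generate_examples(io.StringIO(SAMPLE_TSV)))
print(examples[1]["bias"])  # -> gender
```

`csv.QUOTE_NONE` matters here: comments may contain literal quote characters, and disabling quote handling keeps them from being swallowed by the reader.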