system (HF staff) committed
Commit f842345, 0 parent(s)

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,173 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - found
+ languages:
+ - th
+ licenses:
+ - cc-by-nc-3-0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 1K<n<10K
+ source_datasets:
+ - original
+ task_categories:
+ - text-classification
+ task_ids:
+ - sentiment-classification
+ ---
+
+ # Dataset Card for `thai_toxicity_tweet`
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** https://github.com/tmu-nlp/ThaiToxicityTweetCorpus/
+ - **Repository:** https://github.com/tmu-nlp/ThaiToxicityTweetCorpus/
+ - **Paper:** https://www.ta-cos.org/sites/ta-cos.org/files/1_W32.pdf
+ - **Leaderboard:**
+ - **Point of Contact:** https://www.ta-cos.org/sites/ta-cos.org/files/1_W32.pdf
+
+ ### Dataset Summary
+
+ The Thai Toxicity Tweet Corpus contains 3,300 tweets (506 of which now have missing texts) annotated by humans following guidelines that include a 44-word toxicity dictionary.
+ The authors obtained 2,027 toxic and 1,273 non-toxic tweets, each labeled by three annotators. Corpus analysis indicates that tweets containing toxic words are not always toxic; a tweet is more likely to be toxic if it contains toxic words used in their original (literal) meaning. Annotation disagreements arise primarily from sarcasm, unclear targets, and word-sense ambiguity.
+
+ Notes from the data cleaner: The data was added to [huggingface/datasets](https://www.github.com/huggingface/datasets) in December 2020. By that time, 506 of the tweets were no longer publicly available; these are denoted by `TWEET_NOT_FOUND` in `tweet_text`.
+ Processing can be found at [this PR](https://github.com/tmu-nlp/ThaiToxicityTweetCorpus/pull/1).
+
+ ### Supported Tasks and Leaderboards
+
+ text classification
+
+ ### Languages
+
+ Thai (`th`)
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ ```
+ {'is_toxic': 0, 'nontoxic_votes': 3, 'toxic_votes': 0, 'tweet_id': '898576382384418817', 'tweet_text': 'วันๆ นี่คุยกะหมา แมว หมู ไก่ ม้า ควาย มากกว่าคุยกับคนไปละ'}
+ {'is_toxic': 1, 'nontoxic_votes': 0, 'toxic_votes': 3, 'tweet_id': '898573084981985280', 'tweet_text': 'ควายแดงเมิงด่ารัฐบาลจนรองนายกป่วย พวกมึงกำลังทำลายชาติรู้มั้ย มั้ย มั้ย มั้ยยยยยยยยย news.voicetv.co.th/thailand/51672…'}
+ ```
+
+ ### Data Fields
+
+ - `tweet_id`: ID of the tweet on Twitter
+ - `tweet_text`: text of the tweet
+ - `toxic_votes`: number of annotators (out of 3) who labeled the tweet toxic
+ - `nontoxic_votes`: number of annotators (out of 3) who labeled the tweet NOT toxic
+ - `is_toxic`: 1 if the tweet is toxic, else 0 (majority rule)
+
+ ### Data Splits
+
+ No explicit split is given.
+
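As a small illustration of how `is_toxic` relates to the vote fields, the majority rule can be sketched in a few lines (the helper name is ours, not part of the dataset):

```python
# Sketch of the majority rule behind `is_toxic`: with three annotators,
# whichever label gets more votes wins. The records mirror the two
# sample instances shown under Data Instances.
records = [
    {"toxic_votes": 0, "nontoxic_votes": 3},  # unanimous non-toxic
    {"toxic_votes": 3, "nontoxic_votes": 0},  # unanimous toxic
]

def majority_is_toxic(record):
    """Return 1 if more annotators voted toxic than non-toxic, else 0."""
    return int(record["toxic_votes"] > record["nontoxic_votes"])

print([majority_is_toxic(r) for r in records])  # [0, 1]
```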
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ The dataset was created as part of [Sirihattasak et al. (2019)](https://www.ta-cos.org/sites/ta-cos.org/files/1_W32.pdf).
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ The authors used the public Twitter Search API to collect 9,819 tweets from January–December 2017 based on their keyword dictionary. They then selected 75 tweets per keyword, collecting 3,300 tweets in total for annotation. To ensure data quality, they set the following selection criteria.
+
+ 1. All tweets were selected by humans to prevent word ambiguity. (The Twitter API selects tweets based on the characters in the keyword; for example, for “บ้า” (crazy), the API also returns “บ้านนอก” (countryside), which was not a target.)
+ 2. Tweets had to be sufficiently long to discern their context, so a minimum length of five words was set.
+ 3. Tweets containing only extremely toxic words (for example: “damn, retard, bitch, f*ck, slut!!!”) were not considered.
+ 4. Tweets with English words were allowed if those words were not critical to the labeling decision (for example, the word “f*ck”). As a result, the corpus contains English words, but they make up less than 2% of the total.
+
+ All hashtags, retweets, and links were removed from these tweets. Emoticons were kept, however, because they can convey the real intent of the post's author. Furthermore, for annotation only, some entries such as the names of famous people were replaced with the tag `<ไม่ขอเปิดเผยชื่อ>` (name withheld) for anonymity, to prevent individual bias.
+
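Criterion 2 above (a minimum tweet length of five words) can be sketched as a simple predicate. Note this is only an illustration: Thai is written without spaces, so a real pipeline would first need a word tokenizer (e.g. from `pythainlp`) rather than the pre-tokenized lists used here.

```python
# Illustrative sketch of the five-word minimum-length criterion.
# Tweets are assumed to be pre-tokenized into word lists; real Thai text
# would first require word segmentation.
MIN_WORDS = 5

def long_enough(tokens):
    """Keep a tweet only if it has at least MIN_WORDS tokens."""
    return len(tokens) >= MIN_WORDS

print(long_enough(["นี่", "คือ", "ตัวอย่าง"]))  # False (3 words)
print(long_enough(["หนึ่ง", "สอง", "สาม", "สี่", "ห้า"]))  # True (5 words)
```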
+ #### Who are the source language producers?
+
+ Twitter users in Thailand
+
+ ### Annotations
+
+ #### Annotation process
+
+ The authors manually annotated the dataset with two labels: Toxic and Non-Toxic. A message is defined as toxic if it indicates any harmful, damaging, or negative intent, based on their definition of toxicity. All tweets were annotated by three annotators, following these conditions:
+
+ - A toxic message is one that should be deleted or not allowed in public.
+ - A message's target or consequence must exist. The target can be an individual, a group generalized by a commonality such as religion or ethnicity, or an entire community.
+ - Self-complaint is not considered toxic, because it is not harmful to anyone. However, if a self-complaint is intended to indicate something bad, it is considered toxic.
+ - Both direct and indirect messages, including those with sarcasm, are taken into consideration.
+
+ The authors strictly instructed all annotators on these concepts and gave them a small test to ensure they understood the conditions. The annotation process was divided into two rounds: candidates first annotated a sample to learn the annotation standard, then annotated a different dataset, and only those who obtained a full score in the second round were selected as annotators. 20% of the candidates failed the first round and were not involved in the final annotation.
+
+ #### Who are the annotators?
+
+ Three annotators hired by [Sirihattasak et al. (2019)](https://www.ta-cos.org/sites/ta-cos.org/files/1_W32.pdf)
+
+ ### Personal and Sensitive Information
+
+ Although all tweets are public, the corpus may, given the nature of toxic tweets, contain personal attacks and toxic language.
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ - toxic social media message classification dataset
+
+ ### Discussion of Biases
+
+ - Usernames were masked before annotation to prevent biases based on tweet authors
+
+ ### Other Known Limitations
+
+ - The data was added to [huggingface/datasets](https://www.github.com/huggingface/datasets) in December 2020. By that time, 506 of the tweets were no longer publicly available; these are denoted by `TWEET_NOT_FOUND` in `tweet_text`.
+
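When training on this corpus, the unavailable tweets can be dropped first. A minimal sketch with made-up rows (with the real dataset, the same predicate could be applied via the `datasets` library's `filter` method):

```python
# Drop records whose text was replaced by the TWEET_NOT_FOUND placeholder.
# The rows below are made up for demonstration only.
rows = [
    {"tweet_id": "1", "tweet_text": "TWEET_NOT_FOUND", "is_toxic": 0},
    {"tweet_id": "2", "tweet_text": "ตัวอย่างข้อความ", "is_toxic": 1},
]

available = [r for r in rows if r["tweet_text"] != "TWEET_NOT_FOUND"]
print(len(available))  # 1
```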
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [Sirihattasak et al. (2019)](https://www.ta-cos.org/sites/ta-cos.org/files/1_W32.pdf)
+
+ ### Licensing Information
+
+ CC-BY-NC 3.0
+
+ ### Citation Information
+
+ Please cite the following if you make use of the dataset:
+
+ ```
+ @article{sirihattasak2019annotation,
+   title={Annotation and Classification of Toxicity for Thai Twitter},
+   author={Sirihattasak, Sugan and Komachi, Mamoru and Ishikawa, Hiroshi},
+   year={2019}
+ }
+ ```
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"thai_toxicity_tweet": {"description": "Thai Toxicity Tweet Corpus contains 3,300 tweets annotated by humans with guidelines including a 44-word dictionary.\nThe author obtained 2,027 and 1,273 toxic and non-toxic tweets, respectively; these were labeled by three annotators. The result of corpus\nanalysis indicates that tweets that include toxic words are not always toxic. Further, it is more likely that a tweet is toxic, if it contains\ntoxic words indicating their original meaning. Moreover, disagreements in annotation are primarily because of sarcasm, unclear existing\ntarget, and word sense ambiguity.\n\nNotes from data cleaner: The data is included into [huggingface/datasets](https://www.github.com/huggingface/datasets) in Dec 2020.\nBy this time, 506 of the tweets are not available publicly anymore. We denote these by `TWEET_NOT_FOUND` in `tweet_text`. \nProcessing can be found at [this PR](https://github.com/tmu-nlp/ThaiToxicityTweetCorpus/pull/1).\n", "citation": "@article{sirihattasak2019annotation,\n title={Annotation and Classification of Toxicity for Thai Twitter},\n author={Sirihattasak, Sugan and Komachi, Mamoru and Ishikawa, Hiroshi},\n year={2019}\n}\n", "homepage": "https://github.com/tmu-nlp/ThaiToxicityTweetCorpus/", "license": "", "features": {"tweet_id": {"dtype": "string", "id": null, "_type": "Value"}, "tweet_text": {"dtype": "string", "id": null, "_type": "Value"}, "toxic_votes": {"dtype": "int32", "id": null, "_type": "Value"}, "nontoxic_votes": {"dtype": "int32", "id": null, "_type": "Value"}, "is_toxic": {"num_classes": 2, "names": ["neg", "pos"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "builder_name": "thai_toxicity_tweet", "config_name": "thai_toxicity_tweet", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 637387, "num_examples": 3300, "dataset_name": "thai_toxicity_tweet"}}, "download_checksums": {"https://archive.org/download/ThaiToxicityTweetCorpus/data.zip": {"num_bytes": 194740, "checksum": "4d2af31fe7398e31a5b9ff1a6cc2b4f57cdd4a1eff3b7f49a8157acc31b109eb"}}, "download_size": 194740, "post_processing_size": null, "dataset_size": 637387, "size_in_bytes": 832127}}
dummy/thai_toxicity_tweet/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:951cb700c86670fd4cb0cb05b8eebf97531daba27c452a30de9877dc50b20314
+ size 1024
thai_toxicity_tweet.py ADDED
@@ -0,0 +1,109 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ # Lint as: python3
+
+ from __future__ import absolute_import, division, print_function
+
+ import json
+ import os
+
+ import datasets
+
+
+ _CITATION = """\
+ @article{sirihattasak2019annotation,
+   title={Annotation and Classification of Toxicity for Thai Twitter},
+   author={Sirihattasak, Sugan and Komachi, Mamoru and Ishikawa, Hiroshi},
+   year={2019}
+ }
+ """
+
+ _DESCRIPTION = """\
+ Thai Toxicity Tweet Corpus contains 3,300 tweets annotated by humans with guidelines including a 44-word dictionary.
+ The author obtained 2,027 and 1,273 toxic and non-toxic tweets, respectively; these were labeled by three annotators. The result of corpus
+ analysis indicates that tweets that include toxic words are not always toxic. Further, it is more likely that a tweet is toxic, if it contains
+ toxic words indicating their original meaning. Moreover, disagreements in annotation are primarily because of sarcasm, unclear existing
+ target, and word sense ambiguity.
+
+ Notes from data cleaner: The data is included into [huggingface/datasets](https://www.github.com/huggingface/datasets) in Dec 2020.
+ By this time, 506 of the tweets are not available publicly anymore. We denote these by `TWEET_NOT_FOUND` in `tweet_text`.
+ Processing can be found at [this PR](https://github.com/tmu-nlp/ThaiToxicityTweetCorpus/pull/1).
+ """
+
+
+ class ThaiToxicityTweetConfig(datasets.BuilderConfig):
+     """BuilderConfig for ThaiToxicityTweet."""
+
+     def __init__(self, **kwargs):
+         """BuilderConfig
+
+         Args:
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super(ThaiToxicityTweetConfig, self).__init__(**kwargs)
+
+
+ class ThaiToxicityTweet(datasets.GeneratorBasedBuilder):
+
+     _DOWNLOAD_URL = "https://archive.org/download/ThaiToxicityTweetCorpus/data.zip"
+     _TRAIN_FILE = "train.jsonl"
+
+     BUILDER_CONFIGS = [
+         ThaiToxicityTweetConfig(
+             name="thai_toxicity_tweet",
+             version=datasets.Version("1.0.0"),
+             description=_DESCRIPTION,
+         ),
+     ]
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "tweet_id": datasets.Value("string"),
+                     "tweet_text": datasets.Value("string"),
+                     "toxic_votes": datasets.Value("int32"),
+                     "nontoxic_votes": datasets.Value("int32"),
+                     "is_toxic": datasets.features.ClassLabel(names=["neg", "pos"]),
+                 }
+             ),
+             supervised_keys=None,
+             homepage="https://github.com/tmu-nlp/ThaiToxicityTweetCorpus/",
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         arch_path = dl_manager.download_and_extract(self._DOWNLOAD_URL)
+         data_dir = os.path.join(arch_path, "data")
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={"filepath": os.path.join(data_dir, self._TRAIN_FILE)},
+             ),
+         ]
+
+     def _generate_examples(self, filepath):
+         """Generate examples."""
+         with open(filepath, encoding="utf-8") as f:
+             for id_, row in enumerate(f):
+                 data = json.loads(row)
+                 yield id_, {
+                     "tweet_id": str(data["tweet_id"]),
+                     "tweet_text": data["tweet_text"],
+                     "toxic_votes": data["toxic_votes"],
+                     "nontoxic_votes": data["nontoxic_votes"],
+                     "is_toxic": data["is_toxic"],
+                 }
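The `_generate_examples` loop above can be exercised on a small in-memory stand-in for `train.jsonl` (the two rows below are hypothetical, not real corpus entries):

```python
import io
import json

# Two hypothetical JSON lines mirroring the fields the loading script reads.
sample = io.StringIO(
    '{"tweet_id": 1, "tweet_text": "ตัวอย่าง", "toxic_votes": 0, "nontoxic_votes": 3, "is_toxic": 0}\n'
    '{"tweet_id": 2, "tweet_text": "TWEET_NOT_FOUND", "toxic_votes": 3, "nontoxic_votes": 0, "is_toxic": 1}\n'
)

examples = []
for id_, row in enumerate(sample):
    data = json.loads(row)
    # Same normalization as the script: tweet_id is cast to string.
    examples.append((id_, {"tweet_id": str(data["tweet_id"]), "is_toxic": data["is_toxic"]}))

print(len(examples))  # 2
```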