system HF staff committed on
Commit
940f3bb
0 Parent(s):

Update files from the datasets library (from 1.2.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,193 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - found
+ languages:
+ - pl
+ licenses:
+ - cc-by-nc-sa-1-0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - text-classification
+ - text-scoring
+ task_ids:
+ - multi-class-classification
+ - multi-label-classification
+ - sentiment-classification
+ - sentiment-scoring
+ - topic-classification
+ ---
+
+
+ # Dataset Card for HateSpeechPl
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** http://zil.ipipan.waw.pl/HateSpeech
+ - **Repository:** [N/A]
+ - **Paper:** http://www.qualitativesociologyreview.org/PL/Volume38/PSJ_13_2_Troszynski_Wawer.pdf
+ - **Leaderboard:** [N/A]
+ - **Point of Contact:** [Marek Troszyński](mtroszynski@civitas.edu.pl), [Aleksander Wawer](axw@ipipan.waw.pl)
+
+ ### Dataset Summary
+
+ The dataset was created to analyze the possibility of automating the recognition of hate speech in Polish. It was collected from Polish public forums and represents various types and degrees of offensive language expressed towards minorities.
+
+ The original dataset is provided as an export of MySQL tables, which makes it hard to load. For that reason, it was converted to CSV and published in a GitHub repository.
+
+ ### Supported Tasks and Leaderboards
+
+ - `text-classification`: The dataset can be used for text classification on several target fields, such as the presence of irony/sarcasm, the minority a text refers to, or its topic.
+ - `text-scoring`: Sentiment scoring, based on the `rating` field, is another task the dataset supports (see the loading sketch below).
+
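+ As a quick orientation, the snippet below is a minimal sketch of loading the corpus and picking a target field for either task. It assumes the dataset can be loaded under the `hate_speech_pl` identifier used by the loading script in this repository.
+
+ ```python
+ from datasets import load_dataset
+
+ # The corpus ships as a single "train" split (see Data Splits below).
+ ds = load_dataset("hate_speech_pl", split="train")
+
+ example = ds[0]
+ print(example["text"])    # raw post text, may contain HTML tags
+ print(example["topic"])   # a possible multi-class classification target
+ print(example["rating"])  # 0-4 score, a possible text-scoring target
+ ```
+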
+ ### Languages
+
+ The texts are in Polish, collected from public forums, and retain the original HTML formatting (a tag-stripping sketch follows below).
+
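+ The snippet below is a rough sketch of removing that formatting before modelling; the simple tag-stripping regex is an illustrative assumption, not a preprocessing step defined by the corpus authors.
+
+ ```python
+ import re
+
+ def strip_html(text: str) -> str:
+     """Drop HTML tags such as <font ...> that appear in the raw posts."""
+     return re.sub(r"<[^>]+>", "", text).strip()
+
+ strip_html(' <font color="blue"> Niemiec</font> mówi co innego')
+ # -> 'Niemiec mówi co innego'
+ ```
+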
+ ## Dataset Structure
+
+ ### Data Instances
+
+ The dataset consists of three collections, originally provided as separate MySQL tables and represented here as three CSV files.
+
+ ```
+ {
+   'id': 1,
+   'text_id': 121713,
+   'annotator_id': 1,
+   'minority_id': 72,
+   'negative_emotions': false,
+   'call_to_action': false,
+   'source_of_knowledge': 2,
+   'irony_sarcasm': false,
+   'topic': 18,
+   'text': ' <font color=\"blue\"> Niemiec</font> mówi co innego',
+   'rating': 0
+ }
+ ```
+
+ ### Data Fields
+
+ - `id`: unique identifier of the entry
+ - `text_id`: text identifier, useful when a single text is rated several times by different annotators
+ - `annotator_id`: identifier of the person who annotated the text
+ - `minority_id`: internal identifier of the minority described in the text
+ - `negative_emotions`: boolean indicator of the presence of negative emotions in the text
+ - `call_to_action`: boolean indicator set to true if the text calls the audience to perform some action, typically with negative emotions
+ - `source_of_knowledge`: categorical variable describing the source of knowledge behind the post rating - 0, 1 or 2 (direct, lexical or contextual; a precise description of the values could not be found)
+ - `irony_sarcasm`: boolean indicator of the presence of irony or sarcasm
+ - `topic`: internal identifier of the topic the text is about
+ - `text`: post text content
+ - `rating`: integer value from 0 to 4 - the higher the value, the more negative the text content is (see the bucketing sketch below)
+
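+ For illustration only, the 0-4 `rating` can be collapsed into coarser sentiment buckets; the boundaries used below are assumptions made for this sketch, not thresholds defined by the authors.
+
+ ```python
+ from datasets import load_dataset
+
+ def add_bucket(example):
+     # Bucket boundaries are illustrative assumptions, not author-defined.
+     rating = example["rating"]
+     if rating == 0:
+         example["sentiment_bucket"] = "neutral"
+     elif rating < 3:
+         example["sentiment_bucket"] = "negative"
+     else:
+         example["sentiment_bucket"] = "strongly_negative"
+     return example
+
+ ds = load_dataset("hate_speech_pl", split="train").map(add_bucket)
+ ```
+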
+ ### Data Splits
+
+ The dataset is not pre-split; all examples are published as a single `train` split. A held-out set can be created manually, as sketched below.
+
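+ A minimal sketch of such a manual split is shown below; the 80/20 ratio and the seed are arbitrary choices, not part of the original release.
+
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("hate_speech_pl", split="train")
+ # 80/20 split with a fixed seed for reproducibility; both values are arbitrary.
+ splits = ds.train_test_split(test_size=0.2, seed=42)
+ train_ds, test_ds = splits["train"], splits["test"]
+ ```
+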
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ The dataset was collected from public Polish web forums.
+
+ [More Information Needed]
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ [More Information Needed]
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ The dataset doesn't contain any personal or sensitive information.
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ Automated hate speech recognition is the main beneficial outcome of using the dataset.
+
+ ### Discussion of Biases
+
+ The dataset contains negative posts only and might therefore not be representative of the language as a whole.
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ The dataset was created by Marek Troszyński and Aleksander Wawer, during work done at [IPI PAN](https://www.ipipan.waw.pl/).
+
+ ### Licensing Information
+
+ According to [Metashare](http://metashare.nlp.ipipan.waw.pl/metashare/repository/browse/polish-hatespeech-corpus/21b7e2366b0011e284b6000423bfd61cbc7616f601724f09bafc8a62c42d56de/), the dataset is licensed under CC-BY-NC-SA, but the exact license version is not specified.
+
+ ### Citation Information
+
+ ```
+ @article{troszynski2017czy,
+   title={Czy komputer rozpozna hejtera? Wykorzystanie uczenia maszynowego (ML) w jako{\'s}ciowej analizie danych},
+   author={Troszy{\'n}ski, Marek and Wawer, Aleksandra},
+   journal={Przegl{\k{a}}d Socjologii Jako{\'s}ciowej},
+   volume={13},
+   number={2},
+   pages={62--80},
+   year={2017},
+   publisher={Uniwersytet {\L}{\'o}dzki, Wydzia{\l} Ekonomiczno-Socjologiczny, Katedra Socjologii~…}
+ }
+ ```
+
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"default": {"description": "HateSpeech corpus in the current version contains over 2000 posts crawled from public Polish web. They represent various types and degrees of offensive language, expressed toward minorities (eg. ethnical, racial). The data were annotated manually.\n", "citation": "@article{troszynski2017czy,\n title={Czy komputer rozpozna hejtera? Wykorzystanie uczenia maszynowego (ML) w jako{'s}ciowej analizie danych},\n author={Troszy{'n}ski, Marek and Wawer, Aleksandra},\n journal={Przegl{\\k{a}}d Socjologii Jako{'s}ciowej},\n volume={13},\n number={2},\n pages={62--80},\n year={2017},\n publisher={Uniwersytet {\\L}{'o}dzki, Wydzia{\\l} Ekonomiczno-Socjologiczny, Katedra Socjologii~\u2026}\n}\n", "homepage": "", "license": "CC BY-NC-SA", "features": {"id": {"dtype": "uint16", "id": null, "_type": "Value"}, "text_id": {"dtype": "uint32", "id": null, "_type": "Value"}, "annotator_id": {"dtype": "uint8", "id": null, "_type": "Value"}, "minority_id": {"dtype": "uint8", "id": null, "_type": "Value"}, "negative_emotions": {"dtype": "bool", "id": null, "_type": "Value"}, "call_to_action": {"dtype": "bool", "id": null, "_type": "Value"}, "source_of_knowledge": {"dtype": "uint8", "id": null, "_type": "Value"}, "irony_sarcasm": {"dtype": "bool", "id": null, "_type": "Value"}, "topic": {"dtype": "uint8", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}, "rating": {"dtype": "uint8", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "hate_speech_pl", "config_name": "default", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 3436190, "num_examples": 13887, "dataset_name": "hate_speech_pl"}}, "download_checksums": {"https://raw.githubusercontent.com/aiembassy/hatespeech-corpus-pl/master/data/fragment_anotatora_2011_ZK.csv": {"num_bytes": 417638, "checksum": "bf5c336c02cf87c9c7f1087ec982e57be2cfde924943916e5de1828a03b25a29"}, "https://raw.githubusercontent.com/aiembassy/hatespeech-corpus-pl/master/data/fragment_anotatora_2011b.csv": {"num_bytes": 2556002, "checksum": "ae2f293eb8cab3c44521eace7dc95e04c033cfb77003babd051ca6a7a8491adb"}, "https://raw.githubusercontent.com/aiembassy/hatespeech-corpus-pl/master/data/fragment_anotatora_2012_luty.csv": {"num_bytes": 904314, "checksum": "1c3f3ce27422ef9cf8e0eae10675f55d0c7463ce31e8e524d9ac29b25cad2f81"}}, "download_size": 3877954, "post_processing_size": null, "dataset_size": 3436190, "size_in_bytes": 7314144}}
dummy/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6203e1f91da0f96a3455403c046f688f3a7ab7da948753ec8ed8fb505b9a9e17
+ size 1931
hate_speech_pl.py ADDED
@@ -0,0 +1,112 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """HateSpeech Corpus for Polish"""
+
+ from __future__ import absolute_import, division, print_function
+
+ import csv
+
+ import datasets
+
+
+ # Find for instance the citation on arxiv or on the dataset repo/website
+ _CITATION = r"""\
+ @article{troszynski2017czy,
+   title={Czy komputer rozpozna hejtera? Wykorzystanie uczenia maszynowego (ML) w jako{\'s}ciowej analizie danych},
+   author={Troszy{\'n}ski, Marek and Wawer, Aleksandra},
+   journal={Przegl{\k{a}}d Socjologii Jako{\'s}ciowej},
+   volume={13},
+   number={2},
+   pages={62--80},
+   year={2017},
+   publisher={Uniwersytet {\L}{\'o}dzki, Wydzia{\l} Ekonomiczno-Socjologiczny, Katedra Socjologii~…}
+ }
+ """
+
+ _DESCRIPTION = """\
+ HateSpeech corpus in the current version contains over 2000 posts crawled from public Polish web. They represent various types and degrees of offensive language, expressed toward minorities (eg. ethnical, racial). The data were annotated manually.
+ """
+
+ _HOMEPAGE = "http://zil.ipipan.waw.pl/HateSpeech"
+
+ _LICENSE = "CC BY-NC-SA"
+
+ _URLs = [
+     "https://raw.githubusercontent.com/aiembassy/hatespeech-corpus-pl/master/data/fragment_anotatora_2011_ZK.csv",
+     "https://raw.githubusercontent.com/aiembassy/hatespeech-corpus-pl/master/data/fragment_anotatora_2011b.csv",
+     "https://raw.githubusercontent.com/aiembassy/hatespeech-corpus-pl/master/data/fragment_anotatora_2012_luty.csv",
+ ]
+
+
+ class HateSpeechPl(datasets.GeneratorBasedBuilder):
+     """HateSpeech Corpus for Polish"""
+
+     VERSION = datasets.Version("1.1.0")
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "id": datasets.Value("uint16"),
+                     "text_id": datasets.Value("uint32"),
+                     "annotator_id": datasets.Value("uint8"),
+                     "minority_id": datasets.Value("uint8"),
+                     "negative_emotions": datasets.Value("bool"),
+                     "call_to_action": datasets.Value("bool"),
+                     "source_of_knowledge": datasets.Value("uint8"),
+                     "irony_sarcasm": datasets.Value("bool"),
+                     "topic": datasets.Value("uint8"),
+                     "text": datasets.Value("string"),
+                     "rating": datasets.Value("uint8"),
+                 }
+             ),
+             supervised_keys=None,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         my_urls = _URLs
+         filepaths = dl_manager.download(my_urls)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "filepaths": filepaths,
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, filepaths):
+         """ Yields examples. """
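+         # Note: the source CSV files use Polish column names (e.g. "id_fragmentu",
+         # "negatywne_emocje", "ocena"); each row is mapped below to the English
+         # feature names declared in _info().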
+         for file_id_, filepath in enumerate(filepaths):
+             with open(filepath, encoding="utf-8") as f:
+                 csv_reader = csv.DictReader(f, delimiter=",", escapechar="\\")
+                 for id_, data in enumerate(csv_reader):
+                     yield f"{file_id_}/{id_}", {
+                         "id": data["id_fragmentu"],
+                         "text_id": data["id_tekstu"],
+                         "annotator_id": data["id_anotatora"],
+                         "minority_id": data["id_mniejszosci"],
+                         "negative_emotions": data["negatywne_emocje"],
+                         "call_to_action": data["wezw_ddzial"],
+                         "source_of_knowledge": data["typ_ramki"],
+                         "irony_sarcasm": data["ironia_sarkazm"],
+                         "topic": data["temat"],
+                         "text": data["tekst"],
+                         "rating": data["ocena"],
+                     }