Commit a265511 committed by system (HF staff)
0 Parent(s)

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,208 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - crowdsourced
+ languages:
+   english:
+   - en
+   malayalam:
+   - en
+   - ml
+   tamil:
+   - en
+   - ta
+ licenses:
+ - cc-by-4-0
+ multilinguality:
+   english:
+   - monolingual
+   malayalam:
+   - multilingual
+   tamil:
+   - multilingual
+ size_categories:
+   english:
+   - 10K<n<100K
+   malayalam:
+   - 1K<n<10K
+   tamil:
+   - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - text-classification
+ task_ids:
+ - text-classification-other-hope-speech-classification
+ ---
+
+ # Dataset Card for HopeEDI
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** [Hope Speech Detection for Equality, Diversity, and Inclusion-EACL 2021](https://competitions.codalab.org/competitions/27653#learn_the_details)
+ - **Repository:** [HopeEDI data repository](https://competitions.codalab.org/competitions/27653#participate-get_data)
+ - **Paper:** [HopeEDI: A Multilingual Hope Speech Detection Dataset for Equality, Diversity, and Inclusion](https://www.aclweb.org/anthology/2020.peoples-1.5/)
+ - **Leaderboard:** [Rank list](https://competitions.codalab.org/competitions/27653#results)
+ - **Point of Contact:** [Bharathi Raja Chakravarthi](mailto:bharathiraja.akr@gmail.com)
+
+
+ ### Dataset Summary
+
+ A Hope Speech dataset for Equality, Diversity and Inclusion (HopeEDI) containing user-generated comments from the social media platform YouTube with 28,451, 20,198 and 10,705 comments in English, Tamil and Malayalam, respectively, manually labelled as containing hope speech or not. To our knowledge, this is the first research of its kind to annotate hope speech for equality, diversity and inclusion in a multilingual setting.
+
+ ### Supported Tasks and Leaderboards
+
+ The supported task is hope speech detection: classifying whether a social media comment/post contains hope speech. The competition [rank list](https://competitions.codalab.org/competitions/27653#results) serves as a leaderboard.
+
+ ### Languages
+
+ English, Tamil (code-mixed with English) and Malayalam (code-mixed with English)
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ An example from the English dataset looks as follows:
+
+ | text | label |
+ | :------ | :----- |
+ | all lives matter .without that we never have peace so to me forever all lives matter. | Hope_speech |
+ | I think it's cool that you give people a voice to speak out with here on this channel. | Hope_speech |
+
+ An example from the Tamil dataset looks as follows:
+
+ | text | label |
+ | :------ | :----- |
+ | Idha solla ivalo naala | Non_hope_speech |
+ | இன்று தேசிய பெண் குழந்தைகள் தினம்.. பெண் குழந்தைகளை போற்றுவோம்..அவர்களை பாதுகாப்போம்... | Hope_speech |
+
+ An example from the Malayalam dataset looks as follows:
+
+ | text | label |
+ | :------ | :----- |
+ | ഇത്രെയും കഷ്ടപ്പെട്ട് വളർത്തിയ ആ അമ്മയുടെ മുഖം കണ്ടപ്പോൾ കണ്ണ് നിറഞ്ഞു പോയി | Hope_speech |
+ | snehikunavar aanayalum pennayalum onnichu jeevikatte..aareyum compel cheythitallalooo..parasparamulla ishtathodeyalle...avarum jeevikatte..🥰🥰 | Hope_speech |
+
+ ### Data Fields
+
+ English
+ - `text`: English comment.
+ - `label`: one of "Hope_speech", "Non_hope_speech", "not-English"
+
+ Tamil
+ - `text`: Tamil-English code-mixed comment.
+ - `label`: one of "Hope_speech", "Non_hope_speech", "not-Tamil"
+
+ Malayalam
+ - `text`: Malayalam-English code-mixed comment.
+ - `label`: one of "Hope_speech", "Non_hope_speech", "not-malayalam"
+
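+ The `label` field is stored as a class label, so each string name maps to an integer id in the order the names are listed above. A minimal plain-Python sketch of that mapping for the English configuration (the `datasets` library handles this internally; the helper names here are illustrative only):
+
+ ```python
+ # Label names of the English config, in declaration order; the datasets
+ # library's ClassLabel assigns ids 0, 1, 2 in this same order.
+ ENGLISH_LABELS = ["Hope_speech", "Non_hope_speech", "not-English"]
+
+ def str2int(name: str) -> int:
+     """Return the integer id for a label name."""
+     return ENGLISH_LABELS.index(name)
+
+ def int2str(idx: int) -> str:
+     """Return the label name for an integer id."""
+     return ENGLISH_LABELS[idx]
+ ```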
+ ### Data Splits
+
+ |           |   Train |  Valid |
+ | --------- | ------: | -----: |
+ | English   |   22762 |   2843 |
+ | Tamil     |   16160 |   2018 |
+ | Malayalam |    8564 |   1070 |
+
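+ Note that the released train and validation counts do not add up to the totals quoted in the summary (28,451, 20,198 and 10,705); the remainder presumably corresponds to held-out test sets that are not part of this release (an assumption, not stated in this card). A quick arithmetic check of the gap:
+
+ ```python
+ # Totals from the dataset summary vs. released train + validation counts.
+ # The difference is presumably the unreleased test split (an assumption).
+ totals = {"english": 28451, "tamil": 20198, "malayalam": 10705}
+ released = {"english": 22762 + 2843, "tamil": 16160 + 2018, "malayalam": 8564 + 1070}
+
+ gap = {lang: totals[lang] - released[lang] for lang in totals}
+ print(gap)  # {'english': 2846, 'tamil': 2020, 'malayalam': 1071}
+ ```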
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ Hope is considered significant for the well-being, recuperation and restoration of human life by health professionals.
+ Hate speech or offensive language detection datasets are not available for code-mixed Tamil or code-mixed Malayalam, and existing datasets do not take into account LGBTIQ people, women in STEM and other minorities. Thus, existing hate speech or offensive language detection datasets cannot be used to detect hope or non-hope speech for the EDI of minorities.
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ For English, we collected data on recent topics of EDI, including women in STEM, LGBTIQ issues, COVID-19, Black Lives Matter, United Kingdom (UK) versus China, United States of America (USA) versus China and Australia versus China from YouTube video comments. The data was collected from videos of people from English-speaking countries, such as Australia, Canada, the Republic of Ireland, the United Kingdom, the United States of America and New Zealand.
+
+ For Tamil and Malayalam, we collected data from India on recent topics regarding LGBTIQ issues, COVID-19, women in STEM, the Indo-China war and Dravidian affairs.
+
+ #### Who are the source language producers?
+
+ YouTube users
+
+ ### Annotations
+
+ #### Annotation process
+
+ We created Google Forms to collect annotations. Each form contained a maximum of 100 comments, and each page contained a maximum of 10 comments, to maintain annotation quality. We collected information on each annotator's gender, educational background and medium of schooling in order to gauge annotator diversity and avoid bias. We educated annotators by providing them with YouTube videos on EDI. A minimum of three annotators annotated each form.
+
+ #### Who are the annotators?
+
+ For English-language comments, annotators were from Australia, the Republic of Ireland, the United Kingdom and the United States of America. For Tamil, we obtained annotations both from people from the state of Tamil Nadu in India and from Sri Lanka. Most of the annotators were graduate or postgraduate students.
+
+ ### Personal and Sensitive Information
+
+ Social media data is highly sensitive, and even more so when it relates to minority populations such as the LGBTIQ community or women. We have taken care to minimise the risk of individual identification in the data by removing personal information, such as names (celebrity names were retained). However, to study EDI we needed to keep information relating to the following characteristics: race, gender, sexual orientation, ethnic origin and philosophical beliefs. Annotators were only shown anonymised posts and agreed to make no attempt to contact the comment creators. The dataset will only be made available for research purposes to researchers who agree to follow ethical guidelines.
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ This work is licensed under a [Creative Commons Attribution 4.0 International Licence](http://creativecommons.org/licenses/by/4.0/).
+
+ ### Citation Information
+
+ ```
+ @inproceedings{chakravarthi-2020-hopeedi,
+     title = "{H}ope{EDI}: A Multilingual Hope Speech Detection Dataset for Equality, Diversity, and Inclusion",
+     author = "Chakravarthi, Bharathi Raja",
+     booktitle = "Proceedings of the Third Workshop on Computational Modeling of People's Opinions, Personality, and Emotion's in Social Media",
+     month = dec,
+     year = "2020",
+     address = "Barcelona, Spain (Online)",
+     publisher = "Association for Computational Linguistics",
+     url = "https://www.aclweb.org/anthology/2020.peoples-1.5",
+     pages = "41--53",
+     abstract = "Over the past few years, systems have been developed to control online content and eliminate abusive, offensive or hate speech content. However, people in power sometimes misuse this form of censorship to obstruct the democratic right of freedom of speech. Therefore, it is imperative that research should take a positive reinforcement approach towards online content that is encouraging, positive and supportive contents. Until now, most studies have focused on solving this problem of negativity in the English language, though the problem is much more than just harmful content. Furthermore, it is multilingual as well. Thus, we have constructed a Hope Speech dataset for Equality, Diversity and Inclusion (HopeEDI) containing user-generated comments from the social media platform YouTube with 28,451, 20,198 and 10,705 comments in English, Tamil and Malayalam, respectively, manually labelled as containing hope speech or not. To our knowledge, this is the first research of its kind to annotate hope speech for equality, diversity and inclusion in a multilingual setting. We determined that the inter-annotator agreement of our dataset using Krippendorff{'}s alpha. Further, we created several baselines to benchmark the resulting dataset and the results have been expressed using precision, recall and F1-score. The dataset is publicly available for the research community. We hope that this resource will spur further research on encouraging inclusive and responsive speech that reinforces positiveness.",
+ }
+ ```
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"english": {"description": "A Hope Speech dataset for Equality, Diversity and Inclusion (HopeEDI) containing user-generated comments from the social media platform YouTube with 28,451, 20,198 and 10,705 comments in English, Tamil and Malayalam, respectively, manually labelled as containing hope speech or not.\n", "citation": "@inproceedings{chakravarthi-2020-hopeedi,\ntitle = \"{H}ope{EDI}: A Multilingual Hope Speech Detection Dataset for Equality, Diversity, and Inclusion\",\nauthor = \"Chakravarthi, Bharathi Raja\",\nbooktitle = \"Proceedings of the Third Workshop on Computational Modeling of People's Opinions, Personality, and Emotion's in Social Media\",\nmonth = dec,\nyear = \"2020\",\naddress = \"Barcelona, Spain (Online)\",\npublisher = \"Association for Computational Linguistics\",\nurl = \"https://www.aclweb.org/anthology/2020.peoples-1.5\",\npages = \"41--53\",\nabstract = \"Over the past few years, systems have been developed to control online content and eliminate abusive, offensive or hate speech content. However, people in power sometimes misuse this form of censorship to obstruct the democratic right of freedom of speech. Therefore, it is imperative that research should take a positive reinforcement approach towards online content that is encouraging, positive and supportive contents. Until now, most studies have focused on solving this problem of negativity in the English language, though the problem is much more than just harmful content. Furthermore, it is multilingual as well. Thus, we have constructed a Hope Speech dataset for Equality, Diversity and Inclusion (HopeEDI) containing user-generated comments from the social media platform YouTube with 28,451, 20,198 and 10,705 comments in English, Tamil and Malayalam, respectively, manually labelled as containing hope speech or not. To our knowledge, this is the first research of its kind to annotate hope speech for equality, diversity and inclusion in a multilingual setting. 
We determined that the inter-annotator agreement of our dataset using Krippendorff{'}s alpha. Further, we created several baselines to benchmark the resulting dataset and the results have been expressed using precision, recall and F1-score. The dataset is publicly available for the research community. We hope that this resource will spur further research on encouraging inclusive and responsive speech that reinforces positiveness.\",\n}\n", "homepage": "https://competitions.codalab.org/competitions/27653#learn_the_details", "license": "Creative Commons Attribution 4.0 International Licence", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 3, "names": ["Hope_speech", "Non_hope_speech", "not-English"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "builder_name": "hope_edi", "config_name": "english", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 2306656, "num_examples": 22762, "dataset_name": "hope_edi"}, "validation": {"name": "validation", "num_bytes": 288663, "num_examples": 2843, "dataset_name": "hope_edi"}}, "download_checksums": {"https://drive.google.com/u/0/uc?id=1ydsOTvBZXKqcRvXawOuePrJ99slOEbkk&export=download": {"num_bytes": 2435280, "checksum": "93f6b7d34e2848ea283292a67458947bed117bee612f755477b98c0f475e9c28"}, "https://drive.google.com/u/0/uc?id=1pvpPA97kybx5IyotR9HNuqP4T5ktEtr4&export=download": {"num_bytes": 304621, "checksum": "c461286b074bea53b3fd239ea0dc70515c4d842fc7e34b39a7c78c7a9b7f993d"}}, "download_size": 2739901, "post_processing_size": null, "dataset_size": 2595319, "size_in_bytes": 5335220}, "tamil": {"description": "A Hope Speech dataset for Equality, Diversity and Inclusion (HopeEDI) containing user-generated comments from the social media platform YouTube with 28,451, 20,198 and 10,705 comments in English, Tamil and Malayalam, 
respectively, manually labelled as containing hope speech or not.\n", "citation": "@inproceedings{chakravarthi-2020-hopeedi,\ntitle = \"{H}ope{EDI}: A Multilingual Hope Speech Detection Dataset for Equality, Diversity, and Inclusion\",\nauthor = \"Chakravarthi, Bharathi Raja\",\nbooktitle = \"Proceedings of the Third Workshop on Computational Modeling of People's Opinions, Personality, and Emotion's in Social Media\",\nmonth = dec,\nyear = \"2020\",\naddress = \"Barcelona, Spain (Online)\",\npublisher = \"Association for Computational Linguistics\",\nurl = \"https://www.aclweb.org/anthology/2020.peoples-1.5\",\npages = \"41--53\",\nabstract = \"Over the past few years, systems have been developed to control online content and eliminate abusive, offensive or hate speech content. However, people in power sometimes misuse this form of censorship to obstruct the democratic right of freedom of speech. Therefore, it is imperative that research should take a positive reinforcement approach towards online content that is encouraging, positive and supportive contents. Until now, most studies have focused on solving this problem of negativity in the English language, though the problem is much more than just harmful content. Furthermore, it is multilingual as well. Thus, we have constructed a Hope Speech dataset for Equality, Diversity and Inclusion (HopeEDI) containing user-generated comments from the social media platform YouTube with 28,451, 20,198 and 10,705 comments in English, Tamil and Malayalam, respectively, manually labelled as containing hope speech or not. To our knowledge, this is the first research of its kind to annotate hope speech for equality, diversity and inclusion in a multilingual setting. We determined that the inter-annotator agreement of our dataset using Krippendorff{'}s alpha. Further, we created several baselines to benchmark the resulting dataset and the results have been expressed using precision, recall and F1-score. 
The dataset is publicly available for the research community. We hope that this resource will spur further research on encouraging inclusive and responsive speech that reinforces positiveness.\",\n}\n", "homepage": "https://competitions.codalab.org/competitions/27653#learn_the_details", "license": "Creative Commons Attribution 4.0 International Licence", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 3, "names": ["Hope_speech", "Non_hope_speech", "not-Tamil"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "builder_name": "hope_edi", "config_name": "tamil", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1531013, "num_examples": 16160, "dataset_name": "hope_edi"}, "validation": {"name": "validation", "num_bytes": 197378, "num_examples": 2018, "dataset_name": "hope_edi"}}, "download_checksums": {"https://drive.google.com/u/0/uc?id=1R1jR4DcH2UEaM1ZwDSRHdfTGvkCNu6NW&export=download": {"num_bytes": 1590891, "checksum": "b22dafb5fe05d3eca06046ab9d4992151c9e313613022c02f7ec2bde6f973ac8"}, "https://drive.google.com/u/0/uc?id=1cTaA6OCZUaepl5D-utPw2ZmbonPcw52v&export=download": {"num_bytes": 204876, "checksum": "451833c98dae2a43f70f956bbb55fb545663a84ba0cee456d80a15fabcccf1ed"}}, "download_size": 1795767, "post_processing_size": null, "dataset_size": 1728391, "size_in_bytes": 3524158}, "malayalam": {"description": "A Hope Speech dataset for Equality, Diversity and Inclusion (HopeEDI) containing user-generated comments from the social media platform YouTube with 28,451, 20,198 and 10,705 comments in English, Tamil and Malayalam, respectively, manually labelled as containing hope speech or not.\n", "citation": "@inproceedings{chakravarthi-2020-hopeedi,\ntitle = \"{H}ope{EDI}: A Multilingual Hope Speech Detection Dataset for Equality, Diversity, and Inclusion\",\nauthor = 
\"Chakravarthi, Bharathi Raja\",\nbooktitle = \"Proceedings of the Third Workshop on Computational Modeling of People's Opinions, Personality, and Emotion's in Social Media\",\nmonth = dec,\nyear = \"2020\",\naddress = \"Barcelona, Spain (Online)\",\npublisher = \"Association for Computational Linguistics\",\nurl = \"https://www.aclweb.org/anthology/2020.peoples-1.5\",\npages = \"41--53\",\nabstract = \"Over the past few years, systems have been developed to control online content and eliminate abusive, offensive or hate speech content. However, people in power sometimes misuse this form of censorship to obstruct the democratic right of freedom of speech. Therefore, it is imperative that research should take a positive reinforcement approach towards online content that is encouraging, positive and supportive contents. Until now, most studies have focused on solving this problem of negativity in the English language, though the problem is much more than just harmful content. Furthermore, it is multilingual as well. Thus, we have constructed a Hope Speech dataset for Equality, Diversity and Inclusion (HopeEDI) containing user-generated comments from the social media platform YouTube with 28,451, 20,198 and 10,705 comments in English, Tamil and Malayalam, respectively, manually labelled as containing hope speech or not. To our knowledge, this is the first research of its kind to annotate hope speech for equality, diversity and inclusion in a multilingual setting. We determined that the inter-annotator agreement of our dataset using Krippendorff{'}s alpha. Further, we created several baselines to benchmark the resulting dataset and the results have been expressed using precision, recall and F1-score. The dataset is publicly available for the research community. 
We hope that this resource will spur further research on encouraging inclusive and responsive speech that reinforces positiveness.\",\n}\n", "homepage": "https://competitions.codalab.org/competitions/27653#learn_the_details", "license": "Creative Commons Attribution 4.0 International Licence", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 3, "names": ["Hope_speech", "Non_hope_speech", "not-malayalam"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "builder_name": "hope_edi", "config_name": "malayalam", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1492031, "num_examples": 8564, "dataset_name": "hope_edi"}, "validation": {"name": "validation", "num_bytes": 180713, "num_examples": 1070, "dataset_name": "hope_edi"}}, "download_checksums": {"https://drive.google.com/u/0/uc?id=1wxwqnWGRzwvc_-ugRoFX8BPgpO3Q7sch&export=download": {"num_bytes": 1535357, "checksum": "359e689d08416a611035128f7743a16ea723865bc3f4b89a87cf6b07c4346c6b"}, "https://drive.google.com/u/0/uc?id=1uZ0U9VJQEUPQItPpTJKXH8u_6jXppvJ1&export=download": {"num_bytes": 186177, "checksum": "b09068837b2600cd94313c931ab83fb3d507e0a774bbce080b79750c2a3c62b7"}}, "download_size": 1721534, "post_processing_size": null, "dataset_size": 1672744, "size_in_bytes": 3394278}}
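The `download_checksums` entries above pair each download URL with a SHA-256 checksum and a byte count. A minimal standard-library sketch of how such a checksum can be checked against downloaded bytes (the `verify` helper is illustrative, not part of the `datasets` API):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Return the hex SHA-256 digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, expected_hex: str) -> bool:
    """Check downloaded bytes against an expected hex checksum."""
    return sha256_hex(data) == expected_hex
```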
dummy/english/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dda8bd60c06dd3294769588d8c270621c77e1372cb7a280b3fca8207d355ecbd
+ size 1319
dummy/malayalam/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f0883548c6ed1aef04aa19704f8df589a757fe658a6fe9b26b543dd4b8ec25ca
+ size 2083
dummy/tamil/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b43329592e414ac553601dbd2e21a53aafe7451cf5d4d5a551523e6329cbc5ef
+ size 1153
hope_edi.py ADDED
@@ -0,0 +1,159 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """Hope Speech dataset for Equality, Diversity and Inclusion (HopeEDI)"""
+
+ from __future__ import absolute_import, division, print_function
+
+ import csv
+
+ import datasets
+
+
+ _HOMEPAGE = "https://competitions.codalab.org/competitions/27653#learn_the_details"
+
+
+ _CITATION = """\
+ @inproceedings{chakravarthi-2020-hopeedi,
+     title = "{H}ope{EDI}: A Multilingual Hope Speech Detection Dataset for Equality, Diversity, and Inclusion",
+     author = "Chakravarthi, Bharathi Raja",
+     booktitle = "Proceedings of the Third Workshop on Computational Modeling of People's Opinions, Personality, and Emotion's in Social Media",
+     month = dec,
+     year = "2020",
+     address = "Barcelona, Spain (Online)",
+     publisher = "Association for Computational Linguistics",
+     url = "https://www.aclweb.org/anthology/2020.peoples-1.5",
+     pages = "41--53",
+     abstract = "Over the past few years, systems have been developed to control online content and eliminate abusive, offensive or hate speech content. However, people in power sometimes misuse this form of censorship to obstruct the democratic right of freedom of speech. Therefore, it is imperative that research should take a positive reinforcement approach towards online content that is encouraging, positive and supportive contents. Until now, most studies have focused on solving this problem of negativity in the English language, though the problem is much more than just harmful content. Furthermore, it is multilingual as well. Thus, we have constructed a Hope Speech dataset for Equality, Diversity and Inclusion (HopeEDI) containing user-generated comments from the social media platform YouTube with 28,451, 20,198 and 10,705 comments in English, Tamil and Malayalam, respectively, manually labelled as containing hope speech or not. To our knowledge, this is the first research of its kind to annotate hope speech for equality, diversity and inclusion in a multilingual setting. We determined that the inter-annotator agreement of our dataset using Krippendorff{'}s alpha. Further, we created several baselines to benchmark the resulting dataset and the results have been expressed using precision, recall and F1-score. The dataset is publicly available for the research community. We hope that this resource will spur further research on encouraging inclusive and responsive speech that reinforces positiveness.",
+ }
+ """
+
+ _DESCRIPTION = """\
+ A Hope Speech dataset for Equality, Diversity and Inclusion (HopeEDI) containing user-generated comments from the social media platform YouTube with 28,451, 20,198 and 10,705 comments in English, Tamil and Malayalam, respectively, manually labelled as containing hope speech or not.
+ """
+
+ _LICENSE = "Creative Commons Attribution 4.0 International Licence"
+
+ _URLs = {
+     "english": {
+         "TRAIN_DOWNLOAD_URL": "https://drive.google.com/u/0/uc?id=1ydsOTvBZXKqcRvXawOuePrJ99slOEbkk&export=download",
+         "VALIDATION_DOWNLOAD_URL": "https://drive.google.com/u/0/uc?id=1pvpPA97kybx5IyotR9HNuqP4T5ktEtr4&export=download",
+     },
+     "tamil": {
+         "TRAIN_DOWNLOAD_URL": "https://drive.google.com/u/0/uc?id=1R1jR4DcH2UEaM1ZwDSRHdfTGvkCNu6NW&export=download",
+         "VALIDATION_DOWNLOAD_URL": "https://drive.google.com/u/0/uc?id=1cTaA6OCZUaepl5D-utPw2ZmbonPcw52v&export=download",
+     },
+     "malayalam": {
+         "TRAIN_DOWNLOAD_URL": "https://drive.google.com/u/0/uc?id=1wxwqnWGRzwvc_-ugRoFX8BPgpO3Q7sch&export=download",
+         "VALIDATION_DOWNLOAD_URL": "https://drive.google.com/u/0/uc?id=1uZ0U9VJQEUPQItPpTJKXH8u_6jXppvJ1&export=download",
+     },
+ }
+
+
+ class HopeEdi(datasets.GeneratorBasedBuilder):
+     """HopeEDI dataset."""
+
+     VERSION = datasets.Version("1.0.0")
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(
+             name="english", version=VERSION, description="This configuration covers the English portion of the dataset"
+         ),
+         datasets.BuilderConfig(
+             name="tamil", version=VERSION, description="This configuration covers the Tamil portion of the dataset"
+         ),
+         datasets.BuilderConfig(
+             name="malayalam", version=VERSION, description="This configuration covers the Malayalam portion of the dataset"
+         ),
+     ]
+
+     def _info(self):
+
+         if self.config.name == "english":  # This is the name of the configuration selected in BUILDER_CONFIGS above
+             features = datasets.Features(
+                 {
+                     "text": datasets.Value("string"),
+                     "label": datasets.features.ClassLabel(names=["Hope_speech", "Non_hope_speech", "not-English"]),
+                 }
+             )
+         elif self.config.name == "tamil":
+             features = datasets.Features(
+                 {
+                     "text": datasets.Value("string"),
+                     "label": datasets.features.ClassLabel(names=["Hope_speech", "Non_hope_speech", "not-Tamil"]),
+                 }
+             )
+         else:  # self.config.name == "malayalam"
+             features = datasets.Features(
+                 {
+                     "text": datasets.Value("string"),
+                     "label": datasets.features.ClassLabel(names=["Hope_speech", "Non_hope_speech", "not-malayalam"]),
+                 }
+             )
+
+         return datasets.DatasetInfo(
+             # This is the description that will appear on the datasets page.
+             description=_DESCRIPTION,
+             # This defines the different columns of the dataset and their types
+             features=features,  # Defined above because the label names differ between the three configurations
+             # If there's a common (input, target) tuple from the features,
+             # specify them here. They'll be used if as_supervised=True in
+             # builder.as_dataset.
+             supervised_keys=None,
+             # Homepage of the dataset for documentation
+             homepage=_HOMEPAGE,
+             # License for the dataset if available
+             license=_LICENSE,
+             # Citation for the dataset
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+
+         my_urls = _URLs[self.config.name]
+
+         train_path = dl_manager.download_and_extract(my_urls["TRAIN_DOWNLOAD_URL"])
+         validation_path = dl_manager.download_and_extract(my_urls["VALIDATION_DOWNLOAD_URL"])
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "filepath": train_path,
+                     "split": "train",
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={
+                     "filepath": validation_path,
+                     "split": "validation",
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, filepath, split):
+         """Generate HopeEDI examples."""
+
+         with open(filepath, encoding="utf-8") as csv_file:
+             csv_reader = csv.reader(
+                 csv_file, quotechar='"', delimiter="\t", quoting=csv.QUOTE_NONE, skipinitialspace=False
+             )
+
+             for id_, row in enumerate(csv_reader):
+                 text, label, _unused = row  # each row carries a trailing third column that is discarded
+                 yield id_, {"text": text, "label": label}
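
`_generate_examples` reads each file as tab-separated values with quoting disabled, keeping the first two columns of every row. A self-contained sketch of the same parsing on an in-memory sample (the sample row is invented for illustration):

```python
import csv
import io

# Invented sample mirroring the file layout: text, label, and a trailing
# column that the loader discards.
sample = 'great channel, keep it up\tHope_speech\textra\n'

# Same reader settings as the loading script: tab-delimited, no quoting.
reader = csv.reader(io.StringIO(sample), quotechar='"', delimiter="\t",
                    quoting=csv.QUOTE_NONE, skipinitialspace=False)
rows = [{"text": text, "label": label} for text, label, _unused in reader]
print(rows)  # [{'text': 'great channel, keep it up', 'label': 'Hope_speech'}]
```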