amaniabuzaid committed
Commit
3198cac
1 Parent(s): 6967db4

Upload 3 files

Files changed (3)
  1. README.md +278 -0
  2. dataset_infos.json +1 -0
  3. emotion.py +88 -0
README.md ADDED
@@ -0,0 +1,278 @@
---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
paperswithcode_id: emotion
pretty_name: Emotion
tags:
- emotion-classification
dataset_info:
- config_name: split
  features:
  - name: text
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': sadness
          '1': joy
          '2': love
          '3': anger
          '4': fear
          '5': surprise
  splits:
  - name: train
    num_bytes: 1741597
    num_examples: 16000
  - name: validation
    num_bytes: 214703
    num_examples: 2000
  - name: test
    num_bytes: 217181
    num_examples: 2000
  download_size: 740883
  dataset_size: 2173481
- config_name: unsplit
  features:
  - name: text
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': sadness
          '1': joy
          '2': love
          '3': anger
          '4': fear
          '5': surprise
  splits:
  - name: train
    num_bytes: 45445685
    num_examples: 416809
  download_size: 15388281
  dataset_size: 45445685
train-eval-index:
- config: default
  task: text-classification
  task_id: multi_class_classification
  splits:
    train_split: train
    eval_split: test
  col_mapping:
    text: text
    label: target
  metrics:
  - type: accuracy
    name: Accuracy
  - type: f1
    name: F1 macro
    args:
      average: macro
  - type: f1
    name: F1 micro
    args:
      average: micro
  - type: f1
    name: F1 weighted
    args:
      average: weighted
  - type: precision
    name: Precision macro
    args:
      average: macro
  - type: precision
    name: Precision micro
    args:
      average: micro
  - type: precision
    name: Precision weighted
    args:
      average: weighted
  - type: recall
    name: Recall macro
    args:
      average: macro
  - type: recall
    name: Recall micro
    args:
      average: micro
  - type: recall
    name: Recall weighted
    args:
      average: weighted
---

# Dataset Card for "emotion"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://github.com/dair-ai/emotion_dataset](https://github.com/dair-ai/emotion_dataset)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 16.13 MB
- **Size of the generated dataset:** 47.62 MB
- **Total amount of disk used:** 63.75 MB

### Dataset Summary

Emotion is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise. For more detailed information please refer to the paper.

### Supported Tasks and Leaderboards

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Languages

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Dataset Structure

### Data Instances

An example looks as follows:
```
{
    "text": "im feeling quite sad and sorry for myself but ill snap out of it soon",
    "label": 0
}
```

### Data Fields

The data fields are:
- `text`: a `string` feature.
- `label`: a classification label, with possible values including `sadness` (0), `joy` (1), `love` (2), `anger` (3), `fear` (4), `surprise` (5).

### Data Splits

The dataset has 2 configurations:
- split: with a total of 20_000 examples split into train, validation and test
- unsplit: with a total of 416_809 examples in a single train split

| name    |  train | validation | test |
|---------|-------:|-----------:|-----:|
| split   |  16000 |       2000 | 2000 |
| unsplit | 416809 |        n/a |  n/a |

## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

The dataset should be used for educational and research purposes only.

### Citation Information

If you use this dataset, please cite:
```
@inproceedings{saravia-etal-2018-carer,
    title = "{CARER}: Contextualized Affect Representations for Emotion Recognition",
    author = "Saravia, Elvis and
      Liu, Hsien-Chi Toby and
      Huang, Yen-Hao and
      Wu, Junlin and
      Chen, Yi-Shin",
    booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
    month = oct # "-" # nov,
    year = "2018",
    address = "Brussels, Belgium",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/D18-1404",
    doi = "10.18653/v1/D18-1404",
    pages = "3687--3697",
    abstract = "Emotions are expressed in nuanced ways, which varies by collective or individual experiences, knowledge, and beliefs. Therefore, to understand emotion, as conveyed through text, a robust mechanism capable of capturing and modeling different linguistic nuances and phenomena is needed. We propose a semi-supervised, graph-based algorithm to produce rich structural descriptors which serve as the building blocks for constructing contextualized affect representations from text. The pattern-based representations are further enriched with word embeddings and evaluated through several emotion recognition tasks. Our experimental results demonstrate that the proposed method outperforms state-of-the-art techniques on emotion recognition tasks.",
}
```

### Contributions

Thanks to [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun) for adding this dataset.
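
The card above describes two configurations, `split` and `unsplit`, with `text` and integer `label` fields. Below is a minimal sketch of loading the data with the `datasets` library; the repository id is an assumption (the upstream copy of this dataset is published as `dair-ai/emotion`), so substitute whichever Hub id this copy is hosted under.

```python
from datasets import load_dataset

# NOTE: the repository id below is an assumption; replace it with the Hub id
# this copy is hosted under (the upstream dataset lives at "dair-ai/emotion").
ds = load_dataset("dair-ai/emotion", "split")  # or "unsplit" for the single 416,809-example train split

print(ds)                                           # DatasetDict with train / validation / test
example = ds["train"][0]                            # e.g. {"text": "...", "label": 0}
label_names = ds["train"].features["label"].names   # ["sadness", "joy", "love", "anger", "fear", "surprise"]
print(example["text"], "->", label_names[example["label"]])
```

Because `label` is a `ClassLabel` feature, the integer values map directly to the six emotion names listed in the card.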
dataset_infos.json ADDED
@@ -0,0 +1 @@
{"default": {"description": "Emotion is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise. For more detailed information please refer to the paper.\n", "citation": "@inproceedings{saravia-etal-2018-carer,\n title = \"{CARER}: Contextualized Affect Representations for Emotion Recognition\",\n author = \"Saravia, Elvis and\n Liu, Hsien-Chi Toby and\n Huang, Yen-Hao and\n Wu, Junlin and\n Chen, Yi-Shin\",\n booktitle = \"Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing\",\n month = oct # \"-\" # nov,\n year = \"2018\",\n address = \"Brussels, Belgium\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://www.aclweb.org/anthology/D18-1404\",\n doi = \"10.18653/v1/D18-1404\",\n pages = \"3687--3697\",\n abstract = \"Emotions are expressed in nuanced ways, which varies by collective or individual experiences, knowledge, and beliefs. Therefore, to understand emotion, as conveyed through text, a robust mechanism capable of capturing and modeling different linguistic nuances and phenomena is needed. We propose a semi-supervised, graph-based algorithm to produce rich structural descriptors which serve as the building blocks for constructing contextualized affect representations from text. The pattern-based representations are further enriched with word embeddings and evaluated through several emotion recognition tasks. Our experimental results demonstrate that the proposed method outperforms state-of-the-art techniques on emotion recognition tasks.\",\n}\n", "homepage": "https://github.com/dair-ai/emotion_dataset", "license": "", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 6, "names": ["sadness", "joy", "love", "anger", "fear", "surprise"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": {"input": "text", "output": "label"}, "task_templates": [{"task": "text-classification", "text_column": "text", "label_column": "label", "labels": ["anger", "fear", "joy", "love", "sadness", "surprise"]}], "builder_name": "emotion", "config_name": "default", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1741541, "num_examples": 16000, "dataset_name": "emotion"}, "validation": {"name": "validation", "num_bytes": 214699, "num_examples": 2000, "dataset_name": "emotion"}, "test": {"name": "test", "num_bytes": 217177, "num_examples": 2000, "dataset_name": "emotion"}}, "download_checksums": {"https://www.dropbox.com/s/1pzkadrvffbqw6o/train.txt?dl=1": {"num_bytes": 1658616, "checksum": "3ab03d945a6cb783d818ccd06dafd52d2ed8b4f62f0f85a09d7d11870865b190"}, "https://www.dropbox.com/s/2mzialpsgf9k5l3/val.txt?dl=1": {"num_bytes": 204240, "checksum": "34faaa31962fe63cdf5dbf6c132ef8ab166c640254ab991af78f3aea375e79ef"}, "https://www.dropbox.com/s/ikkqxfdbdec3fuj/test.txt?dl=1": {"num_bytes": 206760, "checksum": "60f531690d20127339e7f054edc299a82c627b5ec0dd5d552d53d544e0cfcc17"}}, "download_size": 2069616, "post_processing_size": null, "dataset_size": 2173417, "size_in_bytes": 4243033}}
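
This `dataset_infos.json` is the legacy metadata for the original `default` configuration (note its Dropbox download URLs, versus the in-repo `data/*.jsonl.gz` files used by the loading script below). A small sketch for inspecting it, assuming the file has been saved locally under that name:

```python
import json

# Assumes the metadata file has been downloaded locally as "dataset_infos.json".
with open("dataset_infos.json", encoding="utf-8") as f:
    infos = json.load(f)

default = infos["default"]
print(default["config_name"], default["version"]["version_str"])
for name, split in default["splits"].items():
    print(f"{name}: {split['num_examples']} examples, {split['num_bytes']} bytes")
```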
emotion.py ADDED
@@ -0,0 +1,88 @@
import json

import datasets
from datasets.tasks import TextClassification


_CITATION = """\
@inproceedings{saravia-etal-2018-carer,
    title = "{CARER}: Contextualized Affect Representations for Emotion Recognition",
    author = "Saravia, Elvis and
      Liu, Hsien-Chi Toby and
      Huang, Yen-Hao and
      Wu, Junlin and
      Chen, Yi-Shin",
    booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
    month = oct # "-" # nov,
    year = "2018",
    address = "Brussels, Belgium",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/D18-1404",
    doi = "10.18653/v1/D18-1404",
    pages = "3687--3697",
    abstract = "Emotions are expressed in nuanced ways, which varies by collective or individual experiences, knowledge, and beliefs. Therefore, to understand emotion, as conveyed through text, a robust mechanism capable of capturing and modeling different linguistic nuances and phenomena is needed. We propose a semi-supervised, graph-based algorithm to produce rich structural descriptors which serve as the building blocks for constructing contextualized affect representations from text. The pattern-based representations are further enriched with word embeddings and evaluated through several emotion recognition tasks. Our experimental results demonstrate that the proposed method outperforms state-of-the-art techniques on emotion recognition tasks.",
}
"""

_DESCRIPTION = """\
Emotion is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise. For more detailed information please refer to the paper.
"""

_HOMEPAGE = "https://github.com/dair-ai/emotion_dataset"

_LICENSE = "The dataset should be used for educational and research purposes only"

_URLS = {
    "split": {
        "train": "data/train.jsonl.gz",
        "validation": "data/validation.jsonl.gz",
        "test": "data/test.jsonl.gz",
    },
    "unsplit": {
        "train": "data/data.jsonl.gz",
    },
}


class Emotion(datasets.GeneratorBasedBuilder):
    VERSION = datasets.Version("1.0.0")
    BUILDER_CONFIGS = [
        datasets.BuilderConfig(
            name="split", version=VERSION, description="Dataset split in train, validation and test"
        ),
        datasets.BuilderConfig(name="unsplit", version=VERSION, description="Unsplit dataset"),
    ]
    DEFAULT_CONFIG_NAME = "split"

    def _info(self):
        class_names = ["sadness", "joy", "love", "anger", "fear", "surprise"]
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {"text": datasets.Value("string"), "label": datasets.ClassLabel(names=class_names)}
            ),
            supervised_keys=("text", "label"),
            homepage=_HOMEPAGE,
            citation=_CITATION,
            license=_LICENSE,
            task_templates=[TextClassification(text_column="text", label_column="label")],
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        paths = dl_manager.download_and_extract(_URLS[self.config.name])
        if self.config.name == "split":
            return [
                datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": paths["train"]}),
                datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": paths["validation"]}),
                datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": paths["test"]}),
            ]
        else:
            return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": paths["train"]})]

    def _generate_examples(self, filepath):
        """Generate examples."""
        with open(filepath, encoding="utf-8") as f:
            for idx, line in enumerate(f):
                example = json.loads(line)
                yield idx, example
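
In this script, `_generate_examples` streams one JSON object per line from the files listed in `_URLS`; inside the builder, `dl_manager.download_and_extract` has already decompressed the `.gz` archives, which is why plain `open()` suffices there. As a rough sketch of what it consumes, one of the data files can also be read directly, assuming the repository's `data/` folder has been fetched locally:

```python
import gzip
import json

# Assumes the repository's data/ directory is available locally.
with gzip.open("data/train.jsonl.gz", "rt", encoding="utf-8") as f:
    for idx, line in enumerate(f):
        example = json.loads(line)  # each line is a JSON object with "text" and "label" fields
        if idx >= 3:
            break
        print(idx, example)
```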