philschmid and parquet-converter committed
Commit 32e8eb6
0 Parent(s):

Duplicate from emotion


Co-authored-by: francky <francky@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,279 @@
+ ---
+ pretty_name: Emotion
+ annotations_creators:
+ - machine-generated
+ language_creators:
+ - machine-generated
+ language:
+ - en
+ license:
+ - other
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - text-classification
+ task_ids:
+ - multi-class-classification
+ paperswithcode_id: emotion
+ train-eval-index:
+ - config: default
+   task: text-classification
+   task_id: multi_class_classification
+   splits:
+     train_split: train
+     eval_split: test
+   col_mapping:
+     text: text
+     label: target
+   metrics:
+   - type: accuracy
+     name: Accuracy
+   - type: f1
+     name: F1 macro
+     args:
+       average: macro
+   - type: f1
+     name: F1 micro
+     args:
+       average: micro
+   - type: f1
+     name: F1 weighted
+     args:
+       average: weighted
+   - type: precision
+     name: Precision macro
+     args:
+       average: macro
+   - type: precision
+     name: Precision micro
+     args:
+       average: micro
+   - type: precision
+     name: Precision weighted
+     args:
+       average: weighted
+   - type: recall
+     name: Recall macro
+     args:
+       average: macro
+   - type: recall
+     name: Recall micro
+     args:
+       average: micro
+   - type: recall
+     name: Recall weighted
+     args:
+       average: weighted
+ tags:
+ - emotion-classification
+ dataset_info:
+ - config_name: split
+   features:
+   - name: text
+     dtype: string
+   - name: label
+     dtype:
+       class_label:
+         names:
+           '0': sadness
+           '1': joy
+           '2': love
+           '3': anger
+           '4': fear
+           '5': surprise
+   splits:
+   - name: train
+     num_bytes: 1741597
+     num_examples: 16000
+   - name: validation
+     num_bytes: 214703
+     num_examples: 2000
+   - name: test
+     num_bytes: 217181
+     num_examples: 2000
+   download_size: 740883
+   dataset_size: 2173481
+ - config_name: unsplit
+   features:
+   - name: text
+     dtype: string
+   - name: label
+     dtype:
+       class_label:
+         names:
+           '0': sadness
+           '1': joy
+           '2': love
+           '3': anger
+           '4': fear
+           '5': surprise
+   splits:
+   - name: train
+     num_bytes: 45445685
+     num_examples: 416809
+   download_size: 15388281
+   dataset_size: 45445685
+ duplicated_from: emotion
+ ---
+
+ # Dataset Card for "emotion"
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** [https://github.com/dair-ai/emotion_dataset](https://github.com/dair-ai/emotion_dataset)
+ - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ - **Size of downloaded dataset files:** 3.95 MB
+ - **Size of the generated dataset:** 4.16 MB
+ - **Total amount of disk used:** 8.11 MB
+
+ ### Dataset Summary
+
+ Emotion is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise. For more detailed information please refer to the paper.
+
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Languages
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ An example looks as follows.
+ ```
+ {
+     "text": "im feeling quite sad and sorry for myself but ill snap out of it soon",
+     "label": 0
+ }
+ ```
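+
+ Each instance is stored as one JSON object per line in the gzipped files under `data/`. As a minimal sketch, such a record can be read directly with the Python standard library (this assumes a local clone of the repository; normally you would load the dataset through the `datasets` library instead):
+
+ ```python
+ import gzip
+ import json
+
+ # Read the first example from the raw training file shipped in this repository.
+ with gzip.open("data/train.jsonl.gz", "rt", encoding="utf-8") as f:
+     first = json.loads(next(f))
+
+ print(first["text"], first["label"])
+ ```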
+
+ ### Data Fields
+
+ The data fields are:
+ - `text`: a `string` feature.
+ - `label`: a classification label, with possible values `sadness` (0), `joy` (1), `love` (2), `anger` (3), `fear` (4), and `surprise` (5).
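+
+ The integer labels follow the `ClassLabel` feature declared in the YAML header above. A minimal sketch of converting between label ids and names with the `datasets` library:
+
+ ```python
+ from datasets import ClassLabel
+
+ # Same class names and order as in this dataset card.
+ label = ClassLabel(names=["sadness", "joy", "love", "anger", "fear", "surprise"])
+
+ print(label.int2str(0))      # 'sadness'
+ print(label.str2int("joy"))  # 1
+ ```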
+
+ ### Data Splits
+
+ The dataset has two configurations:
+ - split: with a total of 20_000 examples split into train, validation and test
+ - unsplit: with a total of 416_809 examples in a single train split
+
+ | name | train | validation | test |
+ |---------|-------:|-----------:|-----:|
+ | split | 16000 | 2000 | 2000 |
+ | unsplit | 416809 | n/a | n/a |
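+
+ Either configuration can be loaded by name. A minimal sketch with the `datasets` library (the Hub id `emotion` is assumed here, since this repository is a duplicate of that dataset):
+
+ ```python
+ from datasets import load_dataset
+
+ # "split" configuration: separate train / validation / test splits.
+ emotions = load_dataset("emotion", "split")
+ print({name: ds.num_rows for name, ds in emotions.items()})  # {'train': 16000, 'validation': 2000, 'test': 2000}
+
+ # "unsplit" configuration: all 416,809 examples in a single train split.
+ emotions_all = load_dataset("emotion", "unsplit")
+ print(emotions_all["train"].num_rows)
+ ```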
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ #### Who are the source language producers?
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ #### Who are the annotators?
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Discussion of Biases
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Other Known Limitations
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ ### Licensing Information
+
+ The dataset should be used for educational and research purposes only.
+
+ ### Citation Information
+
+ If you use this dataset, please cite:
+ ```
+ @inproceedings{saravia-etal-2018-carer,
+     title = "{CARER}: Contextualized Affect Representations for Emotion Recognition",
+     author = "Saravia, Elvis and
+       Liu, Hsien-Chi Toby and
+       Huang, Yen-Hao and
+       Wu, Junlin and
+       Chen, Yi-Shin",
+     booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
+     month = oct # "-" # nov,
+     year = "2018",
+     address = "Brussels, Belgium",
+     publisher = "Association for Computational Linguistics",
+     url = "https://www.aclweb.org/anthology/D18-1404",
+     doi = "10.18653/v1/D18-1404",
+     pages = "3687--3697",
+     abstract = "Emotions are expressed in nuanced ways, which varies by collective or individual experiences, knowledge, and beliefs. Therefore, to understand emotion, as conveyed through text, a robust mechanism capable of capturing and modeling different linguistic nuances and phenomena is needed. We propose a semi-supervised, graph-based algorithm to produce rich structural descriptors which serve as the building blocks for constructing contextualized affect representations from text. The pattern-based representations are further enriched with word embeddings and evaluated through several emotion recognition tasks. Our experimental results demonstrate that the proposed method outperforms state-of-the-art techniques on emotion recognition tasks.",
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun) for adding this dataset.
data/data.jsonl.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8944e6b35cb42294769ac30cf17bd006231545b2eeecfa59324246e192564d1f
+ size 15388281
data/test.jsonl.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4524468d0b7ee8eab07a088216cde7f9278f1c574669504a805ed172df6dad75
+ size 74935
data/train.jsonl.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:757a0a73f1483f4b3f94783b774cdbf0831722a2b2c9abb5b820b4614ff6882a
+ size 591930
data/validation.jsonl.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:50783464882f450f88e61ece964a200e492495eed1472ed520d013bbcd3049be
+ size 74018
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"default": {"description": "Emotion is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise. For more detailed information please refer to the paper.\n", "citation": "@inproceedings{saravia-etal-2018-carer,\n title = \"{CARER}: Contextualized Affect Representations for Emotion Recognition\",\n author = \"Saravia, Elvis and\n Liu, Hsien-Chi Toby and\n Huang, Yen-Hao and\n Wu, Junlin and\n Chen, Yi-Shin\",\n booktitle = \"Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing\",\n month = oct # \"-\" # nov,\n year = \"2018\",\n address = \"Brussels, Belgium\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://www.aclweb.org/anthology/D18-1404\",\n doi = \"10.18653/v1/D18-1404\",\n pages = \"3687--3697\",\n abstract = \"Emotions are expressed in nuanced ways, which varies by collective or individual experiences, knowledge, and beliefs. Therefore, to understand emotion, as conveyed through text, a robust mechanism capable of capturing and modeling different linguistic nuances and phenomena is needed. We propose a semi-supervised, graph-based algorithm to produce rich structural descriptors which serve as the building blocks for constructing contextualized affect representations from text. The pattern-based representations are further enriched with word embeddings and evaluated through several emotion recognition tasks. Our experimental results demonstrate that the proposed method outperforms state-of-the-art techniques on emotion recognition tasks.\",\n}\n", "homepage": "https://github.com/dair-ai/emotion_dataset", "license": "", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 6, "names": ["sadness", "joy", "love", "anger", "fear", "surprise"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": {"input": "text", "output": "label"}, "task_templates": [{"task": "text-classification", "text_column": "text", "label_column": "label", "labels": ["anger", "fear", "joy", "love", "sadness", "surprise"]}], "builder_name": "emotion", "config_name": "default", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1741541, "num_examples": 16000, "dataset_name": "emotion"}, "validation": {"name": "validation", "num_bytes": 214699, "num_examples": 2000, "dataset_name": "emotion"}, "test": {"name": "test", "num_bytes": 217177, "num_examples": 2000, "dataset_name": "emotion"}}, "download_checksums": {"https://www.dropbox.com/s/1pzkadrvffbqw6o/train.txt?dl=1": {"num_bytes": 1658616, "checksum": "3ab03d945a6cb783d818ccd06dafd52d2ed8b4f62f0f85a09d7d11870865b190"}, "https://www.dropbox.com/s/2mzialpsgf9k5l3/val.txt?dl=1": {"num_bytes": 204240, "checksum": "34faaa31962fe63cdf5dbf6c132ef8ab166c640254ab991af78f3aea375e79ef"}, "https://www.dropbox.com/s/ikkqxfdbdec3fuj/test.txt?dl=1": {"num_bytes": 206760, "checksum": "60f531690d20127339e7f054edc299a82c627b5ec0dd5d552d53d544e0cfcc17"}}, "download_size": 2069616, "post_processing_size": null, "dataset_size": 2173417, "size_in_bytes": 4243033}}
emotion.py ADDED
@@ -0,0 +1,88 @@
+ import json
+
+ import datasets
+ from datasets.tasks import TextClassification
+
+
+ _CITATION = """\
+ @inproceedings{saravia-etal-2018-carer,
+     title = "{CARER}: Contextualized Affect Representations for Emotion Recognition",
+     author = "Saravia, Elvis and
+       Liu, Hsien-Chi Toby and
+       Huang, Yen-Hao and
+       Wu, Junlin and
+       Chen, Yi-Shin",
+     booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
+     month = oct # "-" # nov,
+     year = "2018",
+     address = "Brussels, Belgium",
+     publisher = "Association for Computational Linguistics",
+     url = "https://www.aclweb.org/anthology/D18-1404",
+     doi = "10.18653/v1/D18-1404",
+     pages = "3687--3697",
+     abstract = "Emotions are expressed in nuanced ways, which varies by collective or individual experiences, knowledge, and beliefs. Therefore, to understand emotion, as conveyed through text, a robust mechanism capable of capturing and modeling different linguistic nuances and phenomena is needed. We propose a semi-supervised, graph-based algorithm to produce rich structural descriptors which serve as the building blocks for constructing contextualized affect representations from text. The pattern-based representations are further enriched with word embeddings and evaluated through several emotion recognition tasks. Our experimental results demonstrate that the proposed method outperforms state-of-the-art techniques on emotion recognition tasks.",
+ }
+ """
+
+ _DESCRIPTION = """\
+ Emotion is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise. For more detailed information please refer to the paper.
+ """
+
+ _HOMEPAGE = "https://github.com/dair-ai/emotion_dataset"
+
+ _LICENSE = "The dataset should be used for educational and research purposes only"
+
+ _URLS = {
+     "split": {
+         "train": "data/train.jsonl.gz",
+         "validation": "data/validation.jsonl.gz",
+         "test": "data/test.jsonl.gz",
+     },
+     "unsplit": {
+         "train": "data/data.jsonl.gz",
+     },
+ }
+
+
+ class Emotion(datasets.GeneratorBasedBuilder):
+     VERSION = datasets.Version("1.0.0")
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(
+             name="split", version=VERSION, description="Dataset split in train, validation and test"
+         ),
+         datasets.BuilderConfig(name="unsplit", version=VERSION, description="Unsplit dataset"),
+     ]
+     DEFAULT_CONFIG_NAME = "split"
+
+     def _info(self):
+         class_names = ["sadness", "joy", "love", "anger", "fear", "surprise"]
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {"text": datasets.Value("string"), "label": datasets.ClassLabel(names=class_names)}
+             ),
+             supervised_keys=("text", "label"),
+             homepage=_HOMEPAGE,
+             citation=_CITATION,
+             license=_LICENSE,
+             task_templates=[TextClassification(text_column="text", label_column="label")],
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         paths = dl_manager.download_and_extract(_URLS[self.config.name])
+         if self.config.name == "split":
+             return [
+                 datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": paths["train"]}),
+                 datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": paths["validation"]}),
+                 datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": paths["test"]}),
+             ]
+         else:
+             return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": paths["train"]})]
+
+     def _generate_examples(self, filepath):
+         """Generate examples."""
+         with open(filepath, encoding="utf-8") as f:
+             for idx, line in enumerate(f):
+                 example = json.loads(line)
+                 yield idx, example