system HF staff committed on
Commit f07f89f
0 Parent(s):

Update files from the datasets library (from 1.12.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.12.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,207 @@
+ ---
+ annotations_creators:
+ - crowdsourced
+ language_creators:
+ - found
+ languages:
+ - ru
+ licenses:
+ - apache-2-0
+ multilinguality:
+ - monolingual
+ pretty_name: The Corpus for Emotions Detecting in Russian-language text sentences (CEDR)
+ size_categories:
+ - 1K<n<10K
+ source_datasets:
+ - original
+ task_categories:
+ - text-classification
+ task_ids:
+ - sentiment-classification
+ - multi-label-classification
+ - text-classification-other-emotion-classification
+ ---
+
+ # Dataset Card for CEDR
+
+ ## Table of Contents
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** [GitHub](https://github.com/sag111/CEDR)
+ - **Repository:** [GitHub](https://github.com/sag111/CEDR)
+ - **Paper:** [ScienceDirect](https://www.sciencedirect.com/science/article/pii/S1877050921013247)
+ - **Leaderboard:**
+ - **Point of Contact:** [@sag111](mailto:sag111@mail.ru)
+
+ ### Dataset Summary
+
+ The Corpus for Emotions Detecting in Russian-language text sentences of different social sources (CEDR) contains 9410 sentences labeled for 5 emotion categories (joy, sadness, surprise, fear, and anger).
+
+ There are two dataset configurations:
+ - "main" - contains "text", "labels", and "source" features;
+ - "enriched" - includes all "main" features and "sentences".
+
+ The dataset comes with predefined train/test splits.
+
+ ### Supported Tasks and Leaderboards
+
+ This dataset is intended for multi-label emotion classification.
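+
+ A minimal loading sketch with the `datasets` library (assuming it is installed and the dataset is available under the `cedr` name):
+
+ ```python
+ from datasets import load_dataset
+
+ # "main" carries text/labels/source; "enriched" additionally carries tokenized and lemmatized sentences.
+ cedr = load_dataset("cedr", "main")
+
+ print(cedr)              # DatasetDict with "train" and "test" splits
+ print(cedr["train"][0])  # {'text': ..., 'labels': [...], 'source': ...}
+ ```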
+
+ ### Languages
+
+ The data is in Russian.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ Each instance is a text sentence in Russian from several sources with one or more emotion annotations (or no emotion at all).
+
+ An example instance from the dataset is shown below:
+ ```
+ {
+     'text': 'Забавно как люди в возрасте удивляются входящим звонкам на мобильник)',
+     'labels': [0],
+     'source': 'twitter',
+     'sentences': [
+         [
+             {'forma': 'Забавно', 'lemma': 'Забавно'},
+             {'forma': 'как', 'lemma': 'как'},
+             {'forma': 'люди', 'lemma': 'человек'},
+             {'forma': 'в', 'lemma': 'в'},
+             {'forma': 'возрасте', 'lemma': 'возраст'},
+             {'forma': 'удивляются', 'lemma': 'удивляться'},
+             {'forma': 'входящим', 'lemma': 'входить'},
+             {'forma': 'звонкам', 'lemma': 'звонок'},
+             {'forma': 'на', 'lemma': 'на'},
+             {'forma': 'мобильник', 'lemma': 'мобильник'},
+             {'forma': ')', 'lemma': ')'}
+         ]
+     ]
+ }
+ ```
+
+ Emotion label codes: {0: "joy", 1: "sadness", 2: "surprise", 3: "fear", 4: "anger"}
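+
+ A short sketch (assuming the "main" configuration loads as above) of mapping the integer codes back to emotion names through the `ClassLabel` feature:
+
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("cedr", "main", split="train")
+ label_names = ds.features["labels"].feature.names  # ['joy', 'sadness', 'surprise', 'fear', 'anger']
+
+ example = ds[0]
+ print(example["labels"], "->", [label_names[i] for i in example["labels"]])
+ ```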
+
+ ### Data Fields
+
+ The main configuration includes:
+ - text: the text of the sentence;
+ - labels: the emotion annotations;
+ - source: the tag name of the corresponding source.
+
+ In addition to the above, the enriched configuration includes (see the access sketch below):
+ - sentences: the text tokenized and lemmatized with [udpipe](https://ufal.mff.cuni.cz/udpipe)
+   - 'forma': the original word form;
+   - 'lemma': the lemma of this word.
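+
+ A small sketch of reading the 'forma'/'lemma' pairs from the enriched configuration (the nested structure is assumed to follow the example instance above):
+
+ ```python
+ from datasets import load_dataset
+
+ enriched = load_dataset("cedr", "enriched", split="test")
+
+ # Each example holds a list of sentences; each sentence is a list of {'forma', 'lemma'} tokens.
+ for sentence in enriched[0]["sentences"]:
+     print([(token["forma"], token["lemma"]) for token in sentence])
+ ```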
+
+ ### Data Splits
+
+ The dataset provides predefined train and test splits with 7528 and 1882 examples, respectively.
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ The dataset consists of sentences in Russian from several sources (blogs, microblogs, news), which makes it possible to develop methods for analysing various types of text. The crowdsourcing-based methodology used to build the dataset can be reused to collect more examples and thus improve the accuracy of supervised classifiers.
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ Data was collected from several sources: posts of the LiveJournal social network, texts of the online news agency Lenta.ru, and Twitter microblog posts.
+
+ Only sentences containing marker words from the dictionary of [the emotive vocabulary of the Russian language](http://lexrus.ru/default.aspx?p=2876) were selected. The authors manually formed a list of marker words for each emotion by choosing words from different categories of the dictionary.
+
+ In total, 3069 sentences were selected from LiveJournal posts, 2851 sentences from Lenta.ru, and 3490 sentences from Twitter. After selection, the sentences were offered to annotators for labeling.
+
+ #### Who are the source language producers?
+
+ Russian-speaking LiveJournal and Twitter users, and authors of news articles on Lenta.ru.
+
+ ### Annotations
+
+ #### Annotation process
+
+ Sentences were annotated with emotion labels with the help of [a crowdsourcing platform](https://yandex.ru/support/toloka/index.html?lang=en).
+
+ The annotators were asked: “What emotions did the author express in the sentence?” They could assign any number of the following emotion labels: "joy", "sadness", "anger", "fear", and "surprise".
+
+ If an annotator's accuracy on the control sentences (including the trial run) dropped below 70%, or below 66% over the last six control samples, the annotator was dismissed.
+
+ Sentences were split into tasks and assigned to annotators so that each sentence was annotated at least three times. A specific emotion label was assigned to a sentence if more than half of the annotators chose it.
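+
+ As an illustration of this aggregation rule, a hypothetical sketch (the function below is not part of the original pipeline):
+
+ ```python
+ from collections import Counter
+
+ def aggregate(annotations):
+     """annotations: one set of emotion labels per annotator, e.g. [{"joy"}, {"joy", "surprise"}, set()]."""
+     votes = Counter(label for labels in annotations for label in labels)
+     threshold = len(annotations) / 2  # a label is kept only if more than half of the annotators chose it
+     return {label for label, count in votes.items() if count > threshold}
+
+ print(aggregate([{"joy"}, {"joy", "surprise"}, set()]))  # {'joy'}
+ ```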
+
+ #### Who are the annotators?
+
+ Only the 30% best-performing active users (by the platform’s internal rating) who spoke Russian and were over 18 years old were allowed into the annotation process. Moreover, before a platform user could be employed as an annotator, they completed a training task and then had to label 25 trial samples with more than 80% agreement with the annotation that the authors had performed themselves.
+
+ ### Personal and Sensitive Information
+
+ The text of the sentences may contain profanity.
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ Researchers at the AI technology lab at NRC "Kurchatov Institute". See the author [list](https://www.sciencedirect.com/science/article/pii/S1877050921013247).
+
+ ### Licensing Information
+
+ The GitHub repository that houses this dataset is licensed under the Apache License 2.0.
+
+ ### Citation Information
+ If you have found our results helpful in your work, feel free to cite our publication. This is an updated version of the dataset; its collection and preparation are described here:
+ ```
+ @article{sboev2021data,
+ title={Data-Driven Model for Emotion Detection in Russian Texts},
+ author={Sboev, Alexander and Naumov, Aleksandr and Rybka, Roman},
+ journal={Procedia Computer Science},
+ volume={190},
+ pages={637--642},
+ year={2021},
+ publisher={Elsevier}
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@naumov-al](https://github.com/naumov-al) for adding this dataset.
cedr.py ADDED
@@ -0,0 +1,188 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # Lint as: python3
+ """CEDR dataset"""
+
+ import json
+ import os
+
+ import datasets
+
+
+ # TODO: Add BibTeX citation
+ # Find for instance the citation on arxiv or on the dataset repo/website
+ _CITATION = """\
+ @article{sboev2021data,
+ title={Data-Driven Model for Emotion Detection in Russian Texts},
+ author={Sboev, Alexander and Naumov, Aleksandr and Rybka, Roman},
+ journal={Procedia Computer Science},
+ volume={190},
+ pages={637--642},
+ year={2021},
+ publisher={Elsevier}
+ }
+ """
+
+ _LICENSE = """http://www.apache.org/licenses/LICENSE-2.0"""
+
+ # TODO: Add description of the dataset here
+ # You can copy an official description
+ _DESCRIPTION = """\
+ This new dataset is designed to solve the emotion recognition task for text data in Russian. The Corpus for Emotions Detecting in
+ Russian-language text sentences of different social sources (CEDR) contains 9410 sentences in Russian labeled for 5 emotion
+ categories. The data was collected from different sources: posts of the LiveJournal social network, texts of the online news
+ agency Lenta.ru, and Twitter microblog posts. There are two variants of the corpus: main and enriched. The enriched variant
+ includes tokenization and lemmatization. The dataset comes with predefined train/test splits.
+ """
+
+ # TODO: Add a link to an official homepage for the dataset here
+ _HOMEPAGE = "https://github.com/sag111/CEDR"
+
+ # TODO: Add link to the official dataset URLs here
+ # The HuggingFace datasets library doesn't host the datasets but only points to the original files
+ # This can be an arbitrary nested dict/list of URLs (see below in `_split_generators` method)
+ _URLs = {
+     "main": "https://sagteam.ru/cedr/main.zip",
+     "enriched": "https://sagteam.ru/cedr/enriched.zip",
+ }
+
+
+ # TODO: Name of the dataset usually matches the script name with CamelCase instead of snake_case
+ class Cedr(datasets.GeneratorBasedBuilder):
+     """This dataset is designed to solve the emotion recognition task for text data in Russian."""
+
+     VERSION = datasets.Version("0.1.1")
+
+     # This is an example of a dataset with multiple configurations.
+     # If you don't want/need to define several sub-sets in your dataset,
+     # just remove the BUILDER_CONFIG_CLASS and the BUILDER_CONFIGS attributes.
+
+     # If you need to make complex sub-parts in the datasets with configurable options
+     # You can create your own builder configuration class to store attributes, inheriting from datasets.BuilderConfig
+     # BUILDER_CONFIG_CLASS = MyBuilderConfig
+
+     # You will be able to load one or the other configurations in the following list with
+     # data = datasets.load_dataset('my_dataset', 'first_domain')
+     # data = datasets.load_dataset('my_dataset', 'second_domain')
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(
+             name="main", version=VERSION, description="This part of the CEDR dataset covers the main version"
+         ),
+         datasets.BuilderConfig(
+             name="enriched", version=VERSION, description="This part of the CEDR dataset covers the enriched version"
+         ),
+     ]
+
+     DEFAULT_CONFIG_NAME = "main"  # It's not mandatory to have a default configuration. Just use one if it makes sense.
+
+     def _info(self):
+         # TODO: This method specifies the datasets.DatasetInfo object which contains information and typings for the dataset
+         if self.config.name == "main":  # This is the name of the configuration selected in BUILDER_CONFIGS above
+             features = datasets.Features(
+                 {
+                     "text": datasets.Value("string"),
+                     "labels": datasets.features.Sequence(
+                         datasets.ClassLabel(names=["joy", "sadness", "surprise", "fear", "anger"])
+                     ),
+                     "source": datasets.Value("string"),
+                     # These are the features of your dataset like images, labels ...
+                 }
+             )
+         else:  # This is the "enriched" configuration, which additionally carries tokenized/lemmatized sentences
+             features = datasets.Features(
+                 {
+                     "text": datasets.Value("string"),
+                     "labels": datasets.features.Sequence(
+                         datasets.ClassLabel(names=["joy", "sadness", "surprise", "fear", "anger"])
+                     ),
+                     "source": datasets.Value("string"),
+                     "sentences": [
+                         [
+                             {
+                                 "forma": datasets.Value("string"),
+                                 "lemma": datasets.Value("string"),
+                             }
+                         ]
+                     ]
+                     # These are the features of your dataset like images, labels ...
+                 }
+             )
+         return datasets.DatasetInfo(
+             # This is the description that will appear on the datasets page.
+             description=_DESCRIPTION,
+             # This defines the different columns of the dataset and their types
+             features=features,  # Here we define them above because they are different between the two configurations
+             # If there's a common (input, target) tuple from the features,
+             # specify them here. They'll be used if as_supervised=True in
+             # builder.as_dataset.
+             supervised_keys=None,
+             # Homepage of the dataset for documentation
+             homepage=_HOMEPAGE,
+             # License for the dataset if available
+             license=_LICENSE,
+             # Citation for the dataset
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         # TODO: This method is tasked with downloading/extracting the data and defining the splits depending on the configuration
+         # If several configurations are possible (listed in BUILDER_CONFIGS), the configuration selected by the user is in self.config.name
+
+         # dl_manager is a datasets.download.DownloadManager that can be used to download and extract URLs
+         # It can accept any type or nested list/dict and will give back the same structure with the url replaced with path to local files.
+         # By default the archives will be extracted and a path to a cached folder where they are extracted is returned instead of the archive
+         my_urls = _URLs[self.config.name]
+         data_dir = dl_manager.download_and_extract(my_urls)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={
+                     "filepath": os.path.join(data_dir, self.config.name, "train.jsonl"),
+                     "split": "train",
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={"filepath": os.path.join(data_dir, self.config.name, "test.jsonl"), "split": "test"},
+             ),
+         ]
+
+     def _generate_examples(
+         self, filepath, split  # method parameters are unpacked from `gen_kwargs` as given in `_split_generators`
+     ):
+         """Yields examples as (key, example) tuples."""
+         # This method handles input defined in _split_generators to yield (key, example) tuples from the dataset.
+         # The `key` is here for legacy reasons (tfds) and is not important in itself.
+
+         with open(filepath, encoding="utf-8") as f:
+             for id_, row in enumerate(f):
+                 data = json.loads(row)
+                 if self.config.name == "main":
+                     yield id_, {
+                         "text": data["text"],
+                         "source": data["source"],
+                         "labels": data["labels"],
+                     }
+                 else:
+                     yield id_, {
+                         "text": data["text"],
+                         "source": data["source"],
+                         "sentences": data["sentences"],
+                         "labels": data["labels"],
+                     }
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"main": {"description": "This new dataset is designed to solve emotion recognition task for text data in Russian. The Corpus for Emotions Detecting in\nRussian-language text sentences of different social sources (CEDR) contains 9410 sentences in Russian labeled for 5 emotion\ncategories. The data collected from different sources: posts of the LiveJournal social network, texts of the online news\nagency Lenta.ru, and Twitter microblog posts. There are two variants of the corpus: main and enriched. The enriched variant\nis include tokenization and lemmatization. Dataset with predefined train/test splits.\n", "citation": "@article{sboev2021data,\n title={Data-Driven Model for Emotion Detection in Russian Texts},\n author={Sboev, Alexander and Naumov, Aleksandr and Rybka, Roman},\n journal={Procedia Computer Science},\n volume={190},\n pages={637--642},\n year={2021},\n publisher={Elsevier}\n}\n", "homepage": "https://github.com/sag111/CEDR", "license": "http://www.apache.org/licenses/LICENSE-2.0", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "labels": {"feature": {"num_classes": 5, "names": ["joy", "sadness", "surprise", "fear", "anger"], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}, "source": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "cedr", "config_name": "main", "version": {"version_str": "0.1.1", "description": null, "major": 0, "minor": 1, "patch": 1}, "splits": {"train": {"name": "train", "num_bytes": 1418355, "num_examples": 7528, "dataset_name": "cedr"}, "test": {"name": "test", "num_bytes": 350275, "num_examples": 1882, "dataset_name": "cedr"}}, "download_checksums": {"https://sagteam.ru/cedr/main.zip": {"num_bytes": 693026, "checksum": "d81e6d19679a903773b8776c4c0f68755d55596e6b34866fbaa9d39d2e385bd3"}}, "download_size": 693026, "post_processing_size": null, "dataset_size": 1768630, "size_in_bytes": 2461656}, "enriched": {"description": "This new dataset is designed to solve emotion recognition task for text data in Russian. The Corpus for Emotions Detecting in\nRussian-language text sentences of different social sources (CEDR) contains 9410 sentences in Russian labeled for 5 emotion\ncategories. The data collected from different sources: posts of the LiveJournal social network, texts of the online news\nagency Lenta.ru, and Twitter microblog posts. There are two variants of the corpus: main and enriched. The enriched variant\nis include tokenization and lemmatization. 
Dataset with predefined train/test splits.\n", "citation": "@article{sboev2021data,\n title={Data-Driven Model for Emotion Detection in Russian Texts},\n author={Sboev, Alexander and Naumov, Aleksandr and Rybka, Roman},\n journal={Procedia Computer Science},\n volume={190},\n pages={637--642},\n year={2021},\n publisher={Elsevier}\n}\n", "homepage": "https://github.com/sag111/CEDR", "license": "http://www.apache.org/licenses/LICENSE-2.0", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "labels": {"feature": {"num_classes": 5, "names": ["joy", "sadness", "surprise", "fear", "anger"], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}, "source": {"dtype": "string", "id": null, "_type": "Value"}, "sentences": [[{"forma": {"dtype": "string", "id": null, "_type": "Value"}, "lemma": {"dtype": "string", "id": null, "_type": "Value"}}]]}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "cedr", "config_name": "enriched", "version": {"version_str": "0.1.1", "description": null, "major": 0, "minor": 1, "patch": 1}, "splits": {"train": {"name": "train", "num_bytes": 4792366, "num_examples": 7528, "dataset_name": "cedr"}, "test": {"name": "test", "num_bytes": 1182343, "num_examples": 1882, "dataset_name": "cedr"}}, "download_checksums": {"https://sagteam.ru/cedr/enriched.zip": {"num_bytes": 1822522, "checksum": "3b0ee43108ca6a52ce21037d35c99538a4a80e9dba5bd3d02b3ff17d4d89b2b7"}}, "download_size": 1822522, "post_processing_size": null, "dataset_size": 5974709, "size_in_bytes": 7797231}}
dummy/enriched/0.1.1/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dae7e2821b8a898d85f0087cf714baec01e07fb75425c31eb719c8a3e4fe2d27
+ size 3532
dummy/main/0.1.1/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c857e3ad3e02920f6d64dc0cf583be9d2bf1d634db2bb7d31ed9011df9dd7210
+ size 1946