Languages: Romanian
Multilinguality: monolingual
Size Categories: 10K<n<100K
Language Creators: found
Annotations Creators: found
Source Datasets: original
ArXiv: 2009.08712
License: unknown
system (HF staff) committed on
Commit
7fc1052
0 Parent(s):

Update files from the datasets library (from 1.5.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.5.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,161 @@
+ ---
+ annotations_creators:
+ - found
+ language_creators:
+ - found
+ languages:
+ - ro
+ licenses:
+ - unknown
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - text-classification
+ task_ids:
+ - sentiment-classification
+ ---
+
+ # Dataset Card for RoSent
+
+ ## Table of Contents
+ - [Dataset Card for RoSent](#dataset-card-for-rosent)
+   - [Table of Contents](#table-of-contents)
+   - [Dataset Description](#dataset-description)
+     - [Dataset Summary](#dataset-summary)
+     - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+     - [Languages](#languages)
+   - [Dataset Structure](#dataset-structure)
+     - [Data Instances](#data-instances)
+     - [Data Fields](#data-fields)
+     - [Data Splits](#data-splits)
+   - [Dataset Creation](#dataset-creation)
+     - [Curation Rationale](#curation-rationale)
+     - [Source Data](#source-data)
+       - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
+       - [Who are the source language producers?](#who-are-the-source-language-producers)
+     - [Annotations](#annotations)
+       - [Annotation process](#annotation-process)
+       - [Who are the annotators?](#who-are-the-annotators)
+     - [Personal and Sensitive Information](#personal-and-sensitive-information)
+   - [Considerations for Using the Data](#considerations-for-using-the-data)
+     - [Social Impact of Dataset](#social-impact-of-dataset)
+     - [Discussion of Biases](#discussion-of-biases)
+     - [Other Known Limitations](#other-known-limitations)
+   - [Additional Information](#additional-information)
+     - [Dataset Curators](#dataset-curators)
+     - [Licensing Information](#licensing-information)
+     - [Citation Information](#citation-information)
+     - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** [GitHub](https://github.com/dumitrescustefan/Romanian-Transformers/tree/examples/examples/sentiment_analysis)
+ - **Repository:** [GitHub](https://github.com/dumitrescustefan/Romanian-Transformers/tree/examples/examples/sentiment_analysis)
+ - **Paper:** [arXiv preprint](https://arxiv.org/pdf/2009.08712.pdf)
+ - **Leaderboard:**
+ - **Point of Contact:**
+
+ ### Dataset Summary
+
+ RoSent is a Romanian sentiment analysis dataset. It is provided in a processed form, as used by the authors of [`Romanian Transformers`](https://github.com/dumitrescustefan/Romanian-Transformers) in their examples, and is based on the original data available at [this GitHub repository](https://github.com/katakonst/sentiment-analysis-tensorflow). The original data contains product and movie reviews in Romanian.
+
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+
+ The dataset is in Romanian (`ro`).
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ An instance from the `train` split:
+ ```
+ {'id': '0', 'label': 1, 'original_id': '0', 'sentence': 'acest document mi-a deschis cu adevarat ochii la ceea ce oamenii din afara statelor unite s-au gandit la atacurile din 11 septembrie. acest film a fost construit in mod expert si prezinta acest dezastru ca fiind mai mult decat un atac asupra pamantului american. urmarile acestui dezastru sunt previzionate din multe tari si perspective diferite. cred ca acest film ar trebui sa fie mai bine distribuit pentru acest punct. de asemenea, el ajuta in procesul de vindecare sa vada in cele din urma altceva decat stirile despre atacurile teroriste. si unele dintre piese sunt de fapt amuzante, dar nu abuziv asa. acest film a fost extrem de recomandat pentru mine, si am trecut pe acelasi sentiment.'}
+ ```
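The `label` value in this instance is an integer index into the dataset's `ClassLabel` names (the `datasets.ClassLabel` feature exposes `int2str`/`str2int` methods for this). A minimal stdlib sketch of the same mapping, using the class names declared in this card:

```python
# Class names in index order, as declared by the dataset's ClassLabel feature.
LABEL_NAMES = ["negative", "positive"]

def int2str(i: int) -> str:
    """Map an integer label (as stored in each example) to its class name."""
    return LABEL_NAMES[i]

def str2int(name: str) -> int:
    """Map a class name back to its integer label."""
    return LABEL_NAMES.index(name)

print(int2str(1))           # positive
print(str2int("negative"))  # 0
```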
+
+ ### Data Fields
+
+ - `original_id`: a `string` feature containing the original id from the source file.
+ - `id`: a `string` feature.
+ - `sentence`: a `string` feature.
+ - `label`: a classification label with two possible values: `negative` (0) and `positive` (1).
+
+ ### Data Splits
+
+ This dataset has two splits: `train` with 17,941 examples and `test` with 11,005 examples.
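The two splits amount to roughly a 62/38 train/test partition; a quick sanity check using the sizes reported in this card:

```python
# Split sizes as stated in the dataset card.
splits = {"train": 17941, "test": 11005}

total = sum(splits.values())
ratios = {name: round(n / total, 2) for name, n in splits.items()}
print(total)   # 28946
print(ratios)  # {'train': 0.62, 'test': 0.38}
```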
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ The source dataset is available at [this GitHub repository](https://github.com/katakonst/sentiment-analysis-tensorflow) and is based on product and movie reviews. The original source is unknown.
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ Stefan Daniel Dumitrescu, Andrei-Marius Avram, Sampo Pyysalo, [@katakonst](https://github.com/katakonst)
+
+ ### Licensing Information
+
+ [More Information Needed]
+
+ ### Citation Information
+
+ ```
+ @article{dumitrescu2020birth,
+   title={The birth of Romanian BERT},
+   author={Dumitrescu, Stefan Daniel and Avram, Andrei-Marius and Pyysalo, Sampo},
+   journal={arXiv preprint arXiv:2009.08712},
+   year={2020}
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@gchhablani](https://github.com/gchhablani) and [@iliemihai](https://github.com/iliemihai) for adding this dataset.
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"default": {"description": "This dataset is a Romanian Sentiment Analysis dataset.\nIt is present in a processed form, as used by the authors of `Romanian Transformers`\nin their examples and based on the original data present in\n`https://github.com/katakonst/sentiment-analysis-tensorflow`.\n", "citation": "\n@article{dumitrescu2020birth,\n title={The birth of Romanian BERT},\n author={Dumitrescu, Stefan Daniel and Avram, Andrei-Marius and Pyysalo, Sampo},\n journal={arXiv preprint arXiv:2009.08712},\n year={2020}\n}\n", "homepage": "https://github.com/dumitrescustefan/Romanian-Transformers/tree/examples/examples/sentiment_analysis", "license": "", "features": {"original_id": {"dtype": "string", "id": null, "_type": "Value"}, "id": {"dtype": "string", "id": null, "_type": "Value"}, "sentence": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 2, "names": ["negative", "positive"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": {"input": "sentence", "output": "label"}, "builder_name": "ro_sent", "config_name": "default", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 8367687, "num_examples": 17941, "dataset_name": "ro_sent"}, "test": {"name": "test", "num_bytes": 6837430, "num_examples": 11005, "dataset_name": "ro_sent"}}, "download_checksums": {"https://raw.githubusercontent.com/dumitrescustefan/Romanian-Transformers/examples/examples/sentiment_analysis/ro/train.csv": {"num_bytes": 8048544, "checksum": "5b5f36aba3895e75832d1f084459f23ebeec0418d55ab1fbaa015d154879ed0f"}, "https://raw.githubusercontent.com/dumitrescustefan/Romanian-Transformers/examples/examples/sentiment_analysis/ro/test.csv": {"num_bytes": 6651513, "checksum": "2491ca36849e7055f1497575fa691f91671d64b7365f43a3be84ce552b6b65bd"}}, "download_size": 14700057, "post_processing_size": null, "dataset_size": 15205117, "size_in_bytes": 29905174}}
dummy/default/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:516253ed4c61649e4d72f55ee218d805d2df1fa65fc340f114720ca5e8f621bc
+ size 4654
ro_sent.py ADDED
@@ -0,0 +1,124 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """RoSent: a Romanian sentiment analysis dataset."""
+
+ from __future__ import absolute_import, division, print_function
+
+ import csv
+
+ import datasets
+ from datasets.builder import BuilderConfig
+
+
+ # Citation taken from the dataset's arXiv paper.
+ _CITATION = """
+ @article{dumitrescu2020birth,
+   title={The birth of Romanian BERT},
+   author={Dumitrescu, Stefan Daniel and Avram, Andrei-Marius and Pyysalo, Sampo},
+   journal={arXiv preprint arXiv:2009.08712},
+   year={2020}
+ }
+ """
+
+ _DESCRIPTION = """\
+ This dataset is a Romanian Sentiment Analysis dataset.
+ It is present in a processed form, as used by the authors of `Romanian Transformers`
+ in their examples and based on the original data present in
+ `https://github.com/katakonst/sentiment-analysis-tensorflow`. The original dataset is collected
+ from product and movie reviews in Romanian.
+ """
+
+ _HOMEPAGE = "https://github.com/dumitrescustefan/Romanian-Transformers/tree/examples/examples/sentiment_analysis"
+
+ _LICENSE = ""
+
+ _URL = (
+     "https://raw.githubusercontent.com/dumitrescustefan/Romanian-Transformers/examples/examples/sentiment_analysis/ro/"
+ )
+ _URLs = {"train": _URL + "train.csv", "test": _URL + "test.csv"}
+
+
+ class RoSent(datasets.GeneratorBasedBuilder):
+     """Romanian Sentiment Analysis dataset."""
+
+     VERSION = datasets.Version("1.0.0")
+
+     BUILDER_CONFIGS = [
+         BuilderConfig(
+             name="default",
+             version=VERSION,
+             description="This configuration handles all of the Romanian Sentiment Analysis dataset.",
+         ),
+     ]
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "original_id": datasets.Value("string"),
+                 "id": datasets.Value("string"),
+                 "sentence": datasets.Value("string"),
+                 "label": datasets.ClassLabel(names=["negative", "positive"]),  # 0 is negative
+             }
+         )
+
+         return datasets.DatasetInfo(
+             # This is the description that will appear on the datasets page.
+             description=_DESCRIPTION,
+             # This defines the different columns of the dataset and their types.
+             features=features,
+             # The (input, target) pair used when as_supervised=True in builder.as_dataset.
+             supervised_keys=("sentence", "label"),
+             # Homepage of the dataset for documentation.
+             homepage=_HOMEPAGE,
+             # License for the dataset if available.
+             license=_LICENSE,
+             # Citation for the dataset.
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         paths = dl_manager.download(_URLs)
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 # These kwargs will be passed to _generate_examples.
+                 gen_kwargs={"filepath": paths["train"]},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 # These kwargs will be passed to _generate_examples.
+                 gen_kwargs={"filepath": paths["test"]},
+             ),
+         ]
+
+     def _generate_examples(self, filepath):
+         """Yields examples."""
+         with open(filepath, encoding="utf-8") as f:
+             data = csv.DictReader(f, delimiter=",", quotechar='"')
+
+             for row_id, row in enumerate(data):
+                 yield row_id, {
+                     "original_id": row["index"] if "index" in row else row[""],  # test has no 'index' key
+                     "id": str(row_id),  # needed because indices are repeated in the files
+                     "sentence": row["text"],
+                     "label": int(row["label"]),
+                 }
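The `_generate_examples` logic above can be exercised without downloading anything by feeding `csv.DictReader` an in-memory file. The two sample rows below are hypothetical, but follow the `index,text,label` layout the script expects:

```python
import csv
import io

# Hypothetical two-row sample mimicking the train.csv layout (index,text,label).
SAMPLE_CSV = 'index,text,label\n0,"un film excelent",1\n1,"calitate slaba",0\n'

def generate_examples(fileobj):
    """Standalone mirror of the builder's _generate_examples, reading a file object."""
    reader = csv.DictReader(fileobj, delimiter=",", quotechar='"')
    for row_id, row in enumerate(reader):
        yield row_id, {
            "original_id": row["index"] if "index" in row else row[""],
            "id": str(row_id),  # re-number globally; ids restart in each source file
            "sentence": row["text"],
            "label": int(row["label"]),
        }

examples = dict(generate_examples(io.StringIO(SAMPLE_CSV)))
print(examples[0]["sentence"])  # un film excelent
print(examples[1]["label"])     # 0
```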