danyaljj committed on
Commit 0192a28
1 Parent(s): 74c2559

adding the files

Files changed (3)
  1. README.md +183 -0
  2. dataset_infos.json +1 -0
  3. parsinlu_sentiment.py +152 -0
README.md ADDED
@@ -0,0 +1,183 @@
---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
languages:
- fa
licenses:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- sentiment-analysis
task_ids:
- sentiment-analysis
---

# Dataset Card for ParsiNLU (Sentiment Analysis)

## Table of Contents
- [Dataset Card for ParsiNLU (Sentiment Analysis)](#dataset-card-for-parsinlu-sentiment-analysis)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
    - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
    - [Data Fields](#data-fields)
    - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
    - [Curation Rationale](#curation-rationale)
    - [Source Data](#source-data)
      - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
      - [Who are the source language producers?](#who-are-the-source-language-producers)
    - [Annotations](#annotations)
      - [Annotation process](#annotation-process)
      - [Who are the annotators?](#who-are-the-annotators)
    - [Personal and Sensitive Information](#personal-and-sensitive-information)
  - [Considerations for Using the Data](#considerations-for-using-the-data)
    - [Social Impact of Dataset](#social-impact-of-dataset)
    - [Discussion of Biases](#discussion-of-biases)
    - [Other Known Limitations](#other-known-limitations)
  - [Additional Information](#additional-information)
    - [Dataset Curators](#dataset-curators)
    - [Licensing Information](#licensing-information)
    - [Citation Information](#citation-information)
    - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Github](https://github.com/persiannlp/parsinlu/)
- **Repository:** [Github](https://github.com/persiannlp/parsinlu/)
- **Paper:** [arXiv](https://arxiv.org/abs/2012.06154)
- **Leaderboard:**
- **Point of Contact:** d.khashabi@gmail.com

### Dataset Summary

A Persian aspect-based sentiment-analysis dataset of food and movie reviews, part of the ParsiNLU suite of Persian language understanding tasks. Each example pairs a review with a question about a particular aspect and a sentiment label.

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

The dataset is in Persian (`fa`).

## Dataset Structure

### Data Instances

Here is an example from the dataset:
```json
{
    "review": "خوب بود ولی خیلی گرون شده دیگه...فک نکنم به این قیمت ارزش خرید داشته باشد",
    "review_id": "1538",
    "example_id": "4",
    "excel_id": "food_194",
    "question": "نظر شما در مورد بسته بندی و نگهداری این حلوا شکری، ارده و کنجد چیست؟",
    "category": "حلوا شکری، ارده و کنجد",
    "aspect": "بسته بندی",
    "label": "-3",
    "guid": "food-dev-r1538-e4"
}
```

The review roughly translates to "It was good, but it has become so expensive... I don't think it is worth buying at this price," and the question asks about the packaging and storage of this halva/tahini/sesame product. Since the review says nothing about packaging, the label is `-3` (no sentiment expressed).

### Data Fields
- `review`: the review text.
- `review_id`: a unique id associated with the review.
- `example_id`: a unique id associated with the particular attribute of the review being asked about.
- `excel_id`: an id referencing the original annotation spreadsheet (e.g., `food_194`).
- `question`: a natural language question about a particular attribute.
- `category`: the subject discussed in the review.
- `aspect`: the aspect mentioned in the input question.
- `label`: the overall sentiment toward the subject, in the context of the mentioned aspect. Here are the definitions of the labels:
  ```
  '-3': 'no sentiment expressed',
  '-2': 'very negative',
  '-1': 'negative',
  '0': 'neutral',
  '1': 'positive',
  '2': 'very positive',
  '3': 'mixed',
  ```
- `guid`: a globally unique id encoding the domain, split, review id, and example id (e.g., `food-dev-r1538-e4`).
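
Since `label` is stored as a plain string code, a small lookup table makes examples readable when inspecting the data. A minimal sketch follows; the Hub path `persiannlp/parsinlu_sentiment` is an assumption based on this repository's name and may differ.

```python
from datasets import load_dataset

# Human-readable names for the string label codes documented above.
LABEL_NAMES = {
    "-3": "no sentiment expressed",
    "-2": "very negative",
    "-1": "negative",
    "0": "neutral",
    "1": "positive",
    "2": "very positive",
    "3": "mixed",
}

# NOTE: the Hub path is an assumption; adjust it to your namespace or checkout.
train = load_dataset("persiannlp/parsinlu_sentiment", "parsinlu-repo", split="train")

example = train[0]
print(example["review"])
print(example["aspect"], "->", LABEL_NAMES[example["label"]])
```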

### Data Splits

Split sizes (from `dataset_infos.json`):

| Split               | Examples |
|---------------------|---------:|
| `train`             |   13,617 |
| `validation_food`   |    1,330 |
| `validation_movies` |      360 |
| `test_food`         |    1,344 |
| `test_movies`       |      816 |

The validation and test splits are broken out by review domain (food vs. movies).
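
As a quick sanity check against the table above, the following sketch (same assumed Hub path) prints each split's size together with its most common labels:

```python
from collections import Counter

from datasets import load_dataset

# Split names as declared by the loading script in this repository.
SPLITS = ["train", "validation_food", "validation_movies", "test_food", "test_movies"]

for split in SPLITS:
    ds = load_dataset("persiannlp/parsinlu_sentiment", "parsinlu-repo", split=split)
    print(f"{split}: {len(ds)} examples, top labels: {Counter(ds['label']).most_common(3)}")
```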

## Dataset Creation

### Curation Rationale

For details, see [the corresponding paper](https://arxiv.org/abs/2012.06154).

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

This dataset is released under the [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license.

### Citation Information

```bibtex
@article{khashabi2020parsinlu,
    title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
    author = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
    year = {2020},
    journal = {arXiv e-prints},
    eprint = {2012.06154},
}
```

### Contributions

Thanks to [@danyaljj](https://github.com/danyaljj) for adding this dataset.
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"parsinlu-repo": {"description": "A Persian sentiment analysis task (deciding whether a given sentence contains a particular sentiment). \n", "citation": "@article{huggingface:dataset,\n title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},\n authors = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},\n year={2020}\n journal = {arXiv e-prints},\n eprint = {2012.06154}, \n}\n", "homepage": "https://github.com/persiannlp/parsinlu/", "license": "CC BY-NC-SA 4.0", "features": {"review": {"dtype": "string", "id": null, "_type": "Value"}, "review_id": {"dtype": "string", "id": null, "_type": "Value"}, "example_id": {"dtype": "string", "id": null, "_type": "Value"}, "excel_id": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "category": {"dtype": "string", "id": null, "_type": "Value"}, "aspect": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}, "guid": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "parsinlu_reading_comprehension", "config_name": "parsinlu-repo", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 5205739, "num_examples": 13617, "dataset_name": "parsinlu_reading_comprehension"}, "test_food": {"name": "test_food", "num_bytes": 480528, "num_examples": 1344, "dataset_name": "parsinlu_reading_comprehension"}, "test_movies": {"name": "test_movies", "num_bytes": 399773, "num_examples": 816, "dataset_name": "parsinlu_reading_comprehension"}, "validation_food": {"name": "validation_food", "num_bytes": 482942, "num_examples": 1330, "dataset_name": "parsinlu_reading_comprehension"}, "validation_movies": {"name": "validation_movies", "num_bytes": 168771, "num_examples": 360, "dataset_name": "parsinlu_reading_comprehension"}}, "download_checksums": {"https://raw.githubusercontent.com/persiannlp/parsinlu/master/data/sentiment-analysis/ABSA_Dataset_train.jsonl": {"num_bytes": 6580984, "checksum": "06ce2f89c5ce95d271a57857087fe983751c58067c70de414ad9e55c9ffb5716"}, "https://raw.githubusercontent.com/persiannlp/parsinlu/master/data/sentiment-analysis/food_dev.jsonl": {"num_bytes": 617236, "checksum": "2d60d49125ebc77cad125bf0cb3dd737909575bc55dbc69f71df0c1477e1f2a3"}, "https://raw.githubusercontent.com/persiannlp/parsinlu/master/data/sentiment-analysis/movie_dev.jsonl": {"num_bytes": 205095, "checksum": "efaca51170a804c336d98f2e2443536ce4201449c76e8dabf0815fc505cb3219"}, "https://raw.githubusercontent.com/persiannlp/parsinlu/master/data/sentiment-analysis/food_test.jsonl": {"num_bytes": 616236, "checksum": "688c2a108735307669e06359d138be4bf49a62b970a2ff3347802714296b69a4"}, "https://raw.githubusercontent.com/persiannlp/parsinlu/master/data/sentiment-analysis/movie_test.jsonl": {"num_bytes": 482153, "checksum": "209a7c9169a5e2ba8dc57cd727197f4230377a6093e740bfbce04013599e5239"}}, "download_size": 8501704, "post_processing_size": null, "dataset_size": 6737753, "size_in_bytes": 15239457}}
parsinlu_sentiment.py ADDED
@@ -0,0 +1,152 @@
# coding=utf-8
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""ParsiNLU Persian sentiment analysis task"""

import json

import datasets
from datasets import NamedSplit

logger = datasets.logging.get_logger(__name__)

_CITATION = """\
@article{khashabi2020parsinlu,
    title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
    author = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
    year = {2020},
    journal = {arXiv e-prints},
    eprint = {2012.06154},
}
"""

_DESCRIPTION = """\
A Persian sentiment analysis task (deciding whether a given sentence contains a particular sentiment).
"""

_HOMEPAGE = "https://github.com/persiannlp/parsinlu/"

_LICENSE = "CC BY-NC-SA 4.0"

_URL = "https://raw.githubusercontent.com/persiannlp/parsinlu/master/data/sentiment-analysis/"
_URLs = {
    "train": _URL + "ABSA_Dataset_train.jsonl",
    "dev_food": _URL + "food_dev.jsonl",
    "dev_movies": _URL + "movie_dev.jsonl",
    "test_food": _URL + "food_test.jsonl",
    "test_movies": _URL + "movie_test.jsonl",
}

# Custom split names: the validation and test sets are broken out by review domain.
TRAIN_ALL = NamedSplit("train")
TEST_FOOD = NamedSplit("test_food")
TEST_MOVIES = NamedSplit("test_movies")
VALIDATION_FOOD = NamedSplit("validation_food")
VALIDATION_MOVIES = NamedSplit("validation_movies")


class ParsinluSentiment(datasets.GeneratorBasedBuilder):
    """ParsiNLU Persian sentiment analysis task."""

    VERSION = datasets.Version("1.0.0")

    BUILDER_CONFIGS = [
        datasets.BuilderConfig(
            name="parsinlu-repo", version=VERSION, description="ParsiNLU repository: sentiment-analysis"
        ),
    ]

    def _info(self):
        features = datasets.Features(
            {
                "review": datasets.Value("string"),
                "review_id": datasets.Value("string"),
                "example_id": datasets.Value("string"),
                "excel_id": datasets.Value("string"),
                "question": datasets.Value("string"),
                "category": datasets.Value("string"),
                "aspect": datasets.Value("string"),
                "label": datasets.Value("string"),
                "guid": datasets.Value("string"),
            }
        )

        return datasets.DatasetInfo(
            # This is the description that will appear on the datasets page.
            description=_DESCRIPTION,
            # This defines the different columns of the dataset and their types.
            features=features,
            # There is no single canonical (input, target) pair for this task,
            # so as_supervised=True is not supported.
            supervised_keys=None,
            # Homepage of the dataset for documentation.
            homepage=_HOMEPAGE,
            # License for the dataset if available.
            license=_LICENSE,
            # Citation for the dataset.
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        data_dir = dl_manager.download_and_extract(_URLs)
        return [
            datasets.SplitGenerator(
                name=TRAIN_ALL,
                # These kwargs will be passed to _generate_examples.
                gen_kwargs={"filepath": data_dir["train"], "split": "train"},
            ),
            datasets.SplitGenerator(
                name=TEST_FOOD,
                gen_kwargs={"filepath": data_dir["test_food"], "split": "test_food"},
            ),
            datasets.SplitGenerator(
                name=TEST_MOVIES,
                gen_kwargs={"filepath": data_dir["test_movies"], "split": "test_movies"},
            ),
            datasets.SplitGenerator(
                name=VALIDATION_FOOD,
                gen_kwargs={"filepath": data_dir["dev_food"], "split": "dev_food"},
            ),
            datasets.SplitGenerator(
                name=VALIDATION_MOVIES,
                gen_kwargs={"filepath": data_dir["dev_movies"], "split": "dev_movies"},
            ),
        ]

    def _generate_examples(self, filepath, split):
        logger.info("generating examples from = %s", filepath)

        with open(filepath, encoding="utf-8") as f:
            # Each line of the JSONL file is one example whose keys match
            # the features declared in _info().
            for id_, line in enumerate(f):
                yield id_, json.loads(line)
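
For local development, the script can be exercised directly from a checkout before the dataset is published. A minimal sketch, assuming a `datasets` version that supports loading from a local script path:

```python
import datasets

# Load through the local loading script; the relative path is an assumption
# about where this repository is checked out.
dataset = datasets.load_dataset("./parsinlu_sentiment.py", "parsinlu-repo")

print(dataset)  # prints all five splits with their sizes
for example in dataset["validation_movies"].select(range(3)):
    print(example["guid"], example["aspect"], example["label"])
```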