danyaljj committed
Commit 1d3824e
1 Parent(s): ee7e70e

add the reader

Files changed (3)
  1. README.md +167 -0
  2. dataset_infos.json +1 -0
  3. parsinlu_query_paraphrasing.py +124 -0
README.md ADDED
@@ -0,0 +1,167 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - expert-generated
+ languages:
+ - fa
+ licenses:
+ - cc-by-nc-sa-4.0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 1K<n<10K
+ source_datasets:
+ - extended|quora|google
+ task_categories:
+ - query-paraphrasing
+ task_ids:
+ - query-paraphrasing
+ ---
+
+ # Dataset Card for ParsiNLU (Query Paraphrasing)
+
+ ## Table of Contents
+ - [Dataset Card for ParsiNLU (Query Paraphrasing)](#dataset-card-for-parsinlu-query-paraphrasing)
+   - [Table of Contents](#table-of-contents)
+   - [Dataset Description](#dataset-description)
+     - [Dataset Summary](#dataset-summary)
+     - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+     - [Languages](#languages)
+   - [Dataset Structure](#dataset-structure)
+     - [Data Instances](#data-instances)
+     - [Data Fields](#data-fields)
+     - [Data Splits](#data-splits)
+   - [Dataset Creation](#dataset-creation)
+     - [Curation Rationale](#curation-rationale)
+     - [Source Data](#source-data)
+       - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
+       - [Who are the source language producers?](#who-are-the-source-language-producers)
+     - [Annotations](#annotations)
+       - [Annotation process](#annotation-process)
+       - [Who are the annotators?](#who-are-the-annotators)
+     - [Personal and Sensitive Information](#personal-and-sensitive-information)
+   - [Considerations for Using the Data](#considerations-for-using-the-data)
+     - [Social Impact of Dataset](#social-impact-of-dataset)
+     - [Discussion of Biases](#discussion-of-biases)
+     - [Other Known Limitations](#other-known-limitations)
+   - [Additional Information](#additional-information)
+     - [Dataset Curators](#dataset-curators)
+     - [Licensing Information](#licensing-information)
+     - [Citation Information](#citation-information)
+     - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** [GitHub](https://github.com/persiannlp/parsinlu/)
+ - **Repository:** [GitHub](https://github.com/persiannlp/parsinlu/)
+ - **Paper:** [arXiv](https://arxiv.org/abs/2012.06154)
+ - **Leaderboard:**
+ - **Point of Contact:** d.khashabi@gmail.com
+
+ ### Dataset Summary
+
+ A Persian query paraphrasing task (deciding whether two questions are paraphrases of each other).
+ The questions are partially generated from Google auto-complete and partially translated from the Quora paraphrasing dataset.
+
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+
+ The text dataset is in Persian (`fa`).
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ Here is an example from the dataset:
+ ```json
+ {
+     "q1": "اعمال حج تمتع از چه روزی شروع میشود؟",
+     "q2": "ویار از چه روزی شروع میشود؟",
+     "label": "0",
+     "category": "natural"
+ }
+ ```
+
+ Roughly translated, `q1` asks on what day the rites of Hajj Tamattu begin, while `q2` asks on what day pregnancy cravings begin; the two questions are not paraphrases, hence `label` is `0`.
+
+ ### Data Fields
+
+ - `q1`: the first question.
+ - `q2`: the second question.
+ - `category`: whether the question pair was translated from the Quora paraphrasing dataset (`qqp`) or mined from Google auto-complete (`natural`).
+ - `label`: `1` if the questions are paraphrases; `0` otherwise.
+
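+ A minimal sketch of reading these fields with the `datasets` library; the Hub identifier below is an assumption based on this repository's name, so adjust it to wherever the dataset actually lives (the config name `parsinlu-repo` comes from the loader script in this commit):
+
+ ```python
+ from datasets import load_dataset
+
+ # Hypothetical dataset identifier; "parsinlu-repo" is the config defined by the loader.
+ data = load_dataset("persiannlp/parsinlu_query_paraphrasing", "parsinlu-repo")
+
+ sample = data["train"][0]
+ print(sample["q1"], sample["q2"], sample["label"], sample["category"])
+ ```
+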
+ ### Data Splits
+
+ The train/dev/test splits contain 1830/898/1916 examples, respectively.
+
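+ Continuing the sketch above, the split sizes can be checked quickly; note that the loader exposes the `dev` file as the `validation` split:
+
+ ```python
+ # Expected per this card: train=1830, validation=898, test=1916.
+ print({name: len(split) for name, split in data.items()})
+ ```
+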
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ For details, check [the corresponding draft](https://arxiv.org/abs/2012.06154).
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ CC BY-NC-SA 4.0 License
+
+ ### Citation Information
+ ```bibtex
+ @article{huggingface:dataset,
+     title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
+     author = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
+     year = {2020},
+     journal = {arXiv e-prints},
+     eprint = {2012.06154},
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@danyaljj](https://github.com/danyaljj) for adding this dataset.
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"parsinlu-repo": {"description": "A Persian query paraphrasing task (paraphrase or not, given two questions). \nThe questions are partly mined using Google auto-complete, and partly translated from Quora paraphrasing dataset. \n", "citation": "@article{huggingface:dataset,\n    title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},\n    author = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},\n    year = {2020},\n    journal = {arXiv e-prints},\n    eprint = {2012.06154},\n}\n", "homepage": "https://github.com/persiannlp/parsinlu/", "license": "CC BY-NC-SA 4.0", "features": {"q1": {"dtype": "string", "id": null, "_type": "Value"}, "q2": {"dtype": "string", "id": null, "_type": "Value"}, "category": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "parsinlu_query_paraphrasing", "config_name": "parsinlu-repo", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 368062, "num_examples": 1830, "dataset_name": "parsinlu_query_paraphrasing"}, "test": {"name": "test", "num_bytes": 336290, "num_examples": 1916, "dataset_name": "parsinlu_query_paraphrasing"}, "validation": {"name": "validation", "num_bytes": 184671, "num_examples": 898, "dataset_name": "parsinlu_query_paraphrasing"}}, "download_checksums": {"https://raw.githubusercontent.com/persiannlp/parsinlu/master/data/qqp/train.jsonl": {"num_bytes": 430344, "checksum": "e4ca0e4d4b02ebb530d4fd4f3b76396f7e5e25b1df0c4d4a9fc231f4860bbabb"}, "https://raw.githubusercontent.com/persiannlp/parsinlu/master/data/qqp/dev.jsonl": {"num_bytes": 215221, "checksum": "f7515718dde0c42df8430bedbc6f702b63add387de740430d2826702a8b74438"}, "https://raw.githubusercontent.com/persiannlp/parsinlu/master/data/qqp/test.jsonl": {"num_bytes": 401438, "checksum": "5881f70203e937308ffe2cfd0a1da1ac29499d18bbfa219fe9382c42e12c4070"}}, "download_size": 1047003, "post_processing_size": null, "dataset_size": 889023, "size_in_bytes": 1936026}}
parsinlu_query_paraphrasing.py ADDED
@@ -0,0 +1,124 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """ParsiNLU Persian query paraphrasing task"""
+
+ from __future__ import absolute_import, division, print_function
+
+ import json
+
+ import datasets
+
+
+ logger = datasets.logging.get_logger(__name__)
+
+ _CITATION = """\
+ @article{huggingface:dataset,
+     title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
+     author = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
+     year = {2020},
+     journal = {arXiv e-prints},
+     eprint = {2012.06154},
+ }
+ """
+
+ # You can copy an official description
+ _DESCRIPTION = """\
+ A Persian query paraphrasing task (paraphrase or not, given two questions).
+ The questions are partly mined using Google auto-complete, and partly translated from Quora paraphrasing dataset.
+ """
+
+ _HOMEPAGE = "https://github.com/persiannlp/parsinlu/"
+
+ _LICENSE = "CC BY-NC-SA 4.0"
+
+ _URL = "https://raw.githubusercontent.com/persiannlp/parsinlu/master/data/qqp/"
+ _URLs = {
+     "train": _URL + "train.jsonl",
+     "dev": _URL + "dev.jsonl",
+     "test": _URL + "test.jsonl",
+ }
+
+
+ class ParsinluQueryParaphrasing(datasets.GeneratorBasedBuilder):
+     """ParsiNLU Persian query paraphrasing task."""
+
+     VERSION = datasets.Version("1.0.0")
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(
+             name="parsinlu-repo", version=VERSION, description="ParsiNLU repository: query-paraphrasing"
+         ),
+     ]
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "q1": datasets.Value("string"),
+                 "q2": datasets.Value("string"),
+                 "category": datasets.Value("string"),
+                 "label": datasets.Value("string"),
+             }
+         )
+
+         return datasets.DatasetInfo(
+             # This is the description that will appear on the datasets page.
+             description=_DESCRIPTION,
+             # This defines the different columns of the dataset and their types;
+             # this loader has a single configuration, so the features are fixed.
+             features=features,
+             # If there's a common (input, target) tuple from the features,
+             # specify them here. They'll be used if as_supervised=True in
+             # builder.as_dataset.
+             supervised_keys=None,
+             # Homepage of the dataset for documentation
+             homepage=_HOMEPAGE,
+             # License for the dataset if available
+             license=_LICENSE,
+             # Citation for the dataset
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         data_dir = dl_manager.download_and_extract(_URLs)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={
+                     "filepath": data_dir["train"],
+                     "split": "train",
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={"filepath": data_dir["test"], "split": "test"},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={
+                     "filepath": data_dir["dev"],
+                     "split": "dev",
+                 },
+             ),
+         ]
+
+     def _generate_examples(self, filepath, split):
+         logger.info("generating examples from = %s", filepath)
+
+         with open(filepath, encoding="utf-8") as f:
+             # Each JSONL line is one record whose keys match the q1/q2/category/label features above.
+             for id_, row in enumerate(f):
+                 data = json.loads(row)
+                 yield id_, data
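
A minimal sketch for exercising this loader from a local checkout, assuming the script above is saved as `parsinlu_query_paraphrasing.py` in the working directory (recent versions of `datasets` may additionally require `trust_remote_code=True` for script-based loading):

```python
from datasets import load_dataset

# Load via the local script and the "parsinlu-repo" config it defines.
data = load_dataset("./parsinlu_query_paraphrasing.py", "parsinlu-repo")
print(data)  # train / test / validation splits, per _split_generators above
```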