danyaljj committed
Commit d3d1438
1 Parent(s): bc4e442

adding the files

Files changed (3)
  1. README.md +171 -0
  2. dataset_infos.json +1 -0
  3. parsinlu_reading_comprehension.py +142 -0
README.md ADDED
@@ -0,0 +1,171 @@
---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
languages:
- fa
licenses:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended|wikipedia|google
task_categories:
- question-answering
task_ids:
- extractive-qa
---

# Dataset Card for ParsiNLU (Reading Comprehension)

## Table of Contents
- [Dataset Card for ParsiNLU (Reading Comprehension)](#dataset-card-for-parsinlu-reading-comprehension)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
    - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
    - [Data Fields](#data-fields)
    - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
    - [Curation Rationale](#curation-rationale)
    - [Source Data](#source-data)
      - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
      - [Who are the source language producers?](#who-are-the-source-language-producers)
    - [Annotations](#annotations)
      - [Annotation process](#annotation-process)
      - [Who are the annotators?](#who-are-the-annotators)
    - [Personal and Sensitive Information](#personal-and-sensitive-information)
  - [Considerations for Using the Data](#considerations-for-using-the-data)
    - [Social Impact of Dataset](#social-impact-of-dataset)
    - [Discussion of Biases](#discussion-of-biases)
    - [Other Known Limitations](#other-known-limitations)
  - [Additional Information](#additional-information)
    - [Dataset Curators](#dataset-curators)
    - [Licensing Information](#licensing-information)
    - [Citation Information](#citation-information)
    - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Github](https://github.com/persiannlp/parsinlu/)
- **Repository:** [Github](https://github.com/persiannlp/parsinlu/)
- **Paper:** [Arxiv](https://arxiv.org/abs/2012.06154)
- **Leaderboard:**
- **Point of Contact:** d.khashabi@gmail.com

### Dataset Summary

A Persian reading comprehension task (generating an answer, given a question and a context paragraph).
The questions are mined using Google auto-complete; their answers and the corresponding evidence documents are manually annotated by native speakers.

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

The text dataset is in Persian (`fa`).

## Dataset Structure

### Data Instances

Here is an example from the dataset:
```
{
    'question': 'پیامبر در چه سالی به پیامبری رسید؟',
    'url': 'https://fa.wikipedia.org/wiki/%D9%85%D8%AD%D9%85%D8%AF',
    'passage': 'محمد که از روش زندگی مردم مکه ناخشنود بود، گهگاه در غار حرا در یکی از کوه\u200cهای اطراف آن دیار به تفکر و عبادت می\u200cپرداخت. به باور مسلمانان، محمد در همین مکان و در حدود ۴۰ سالگی از طرف خدا به پیامبری برگزیده، و وحی بر او فروفرستاده شد. در نظر آنان، دعوت محمد همانند دعوت دیگر پیامبرانِ کیش یکتاپرستی مبنی بر این بود که خداوند (الله) یکتاست و تسلیم شدن برابر خدا راه رسیدن به اوست.',
    'answers': [
        {'answer_start': 160, 'answer_text': 'حدود ۴۰ سالگی'}
    ]
}
```
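
Records like the one above can be inspected by loading the dataset with the Hugging Face `datasets` library. The sketch below assumes the dataset is published on the Hub under the id `persiannlp/parsinlu_reading_comprehension`; if your copy lives elsewhere, point `load_dataset` at the local loading script instead.

```python
from datasets import load_dataset

# Hub id assumed here; a path to the local parsinlu_reading_comprehension.py script also works.
dataset = load_dataset("persiannlp/parsinlu_reading_comprehension")

print(dataset)              # available splits and their sizes
print(dataset["train"][0])  # one record, similar to the example shown above
```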

### Data Fields

- `question`: the question, mined using Google auto-complete.
- `passage`: the passage that contains the answer.
- `url`: the URL from which the passage was mined.
- `answers`: a list of answers, each giving the answer text (`answer_text`) and its character start offset in the passage (`answer_start`).

Note that the loading script in this repository exposes the passage under the feature name `context`.
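
The `answer_start` offsets can be checked against the passage directly. A minimal sketch, reusing the `dataset` object from the loading example above (the loading script records `-1` when an answer string cannot be located verbatim in the passage):

```python
example = dataset["train"][0]
context = example["context"]
answers = example["answers"]  # a Sequence of dicts is returned as a dict of lists

for start, text in zip(answers["answer_start"], answers["answer_text"]):
    if start >= 0:
        # The recorded offset should point at the answer span inside the passage.
        print(text, "->", context[start:start + len(text)])
    else:
        # -1 means the answer string was not found verbatim in the passage.
        print("answer not located in passage:", text)
```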

### Data Splits

The dataset contains 600 training, 125 validation, and 575 test examples.
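
With the dataset loaded as above, the split sizes can be confirmed directly:

```python
# Expected counts (per dataset_infos.json): train 600, validation 125, test 575.
for split_name, split in dataset.items():
    print(split_name, len(split))
```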

## Dataset Creation

### Curation Rationale

The questions were collected via Google auto-complete.
The answers were annotated by native speakers.
For more details, check [the corresponding draft](https://arxiv.org/abs/2012.06154).

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

CC BY-NC-SA 4.0 License

### Citation Information
```bibtex
@article{huggingface:dataset,
    title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
    author = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
    year = {2020},
    journal = {arXiv e-prints},
    eprint = {2012.06154},
}
```

### Contributions

Thanks to [@danyaljj](https://github.com/danyaljj) for adding this dataset.
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"parsinlu-repo": {"description": "A Persian reading comprehenion task (generating an answer, given a question and a context paragraph). \nThe questions are mined using Google auto-complete, their answers and the corresponding evidence documents are manually annotated by native speakers. \n", "citation": "@article{huggingface:dataset,\n title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},\n authors = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},\n year={2020}\n journal = {arXiv e-prints},\n eprint = {2012.06154}, \n}\n", "homepage": "https://github.com/persiannlp/parsinlu/", "license": "CC BY-NC-SA 4.0", "features": {"question": {"dtype": "string", "id": null, "_type": "Value"}, "url": {"dtype": "string", "id": null, "_type": "Value"}, "context": {"dtype": "string", "id": null, "_type": "Value"}, "answers": {"feature": {"answer_start": {"dtype": "int32", "id": null, "_type": "Value"}, "answer_text": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "parsinlu_reading_comprehension", "config_name": "parsinlu-repo", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 747679, "num_examples": 600, "dataset_name": "parsinlu_reading_comprehension"}, "test": {"name": "test", "num_bytes": 681945, "num_examples": 575, "dataset_name": "parsinlu_reading_comprehension"}, "validation": {"name": "validation", "num_bytes": 163185, "num_examples": 125, "dataset_name": "parsinlu_reading_comprehension"}}, "download_checksums": {"https://raw.githubusercontent.com/persiannlp/parsinlu/master/data/reading_comprehension/train.jsonl": {"num_bytes": 1933004, "checksum": "488fa21f303d880b82b8ba590e0c5a5b61dfb1442a96aa2db19f487a16f5e480"}, "https://raw.githubusercontent.com/persiannlp/parsinlu/master/data/reading_comprehension/dev.jsonl": {"num_bytes": 424640, "checksum": "6ce2aed6d8ace6ed7f9ef4db9baba3b5efdfa9f99d605dccb494ce39cd63c9c6"}, "https://raw.githubusercontent.com/persiannlp/parsinlu/master/data/reading_comprehension/eval.jsonl": {"num_bytes": 1760219, "checksum": "95ac9cec4548cb35a5b7b2d85dabbd89fe0e724245935fdeeaddea3c07e644fe"}}, "download_size": 4117863, "post_processing_size": null, "dataset_size": 1592809, "size_in_bytes": 5710672}}
parsinlu_reading_comprehension.py ADDED
@@ -0,0 +1,142 @@
# coding=utf-8
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""ParsiNLU Persian reading comprehension task"""

from __future__ import absolute_import, division, print_function

import json

import datasets


logger = datasets.logging.get_logger(__name__)

_CITATION = """\
@article{huggingface:dataset,
    title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
    author = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
    year = {2020},
    journal = {arXiv e-prints},
    eprint = {2012.06154},
}
"""

_DESCRIPTION = """\
A Persian reading comprehension task (generating an answer, given a question and a context paragraph).
The questions are mined using Google auto-complete; their answers and the corresponding evidence documents are manually annotated by native speakers.
"""

_HOMEPAGE = "https://github.com/persiannlp/parsinlu/"

_LICENSE = "CC BY-NC-SA 4.0"

_URL = "https://raw.githubusercontent.com/persiannlp/parsinlu/master/data/reading_comprehension/"
_URLs = {
    "train": _URL + "train.jsonl",
    "dev": _URL + "dev.jsonl",
    "test": _URL + "eval.jsonl",
}


class ParsinluReadingComprehension(datasets.GeneratorBasedBuilder):
    """ParsiNLU Persian reading comprehension task."""

    VERSION = datasets.Version("1.0.0")

    BUILDER_CONFIGS = [
        datasets.BuilderConfig(
            name="parsinlu-repo", version=VERSION, description="ParsiNLU repository: reading-comprehension"
        ),
    ]

    def _info(self):
        features = datasets.Features(
            {
                "question": datasets.Value("string"),
                "url": datasets.Value("string"),
                "context": datasets.Value("string"),
                "answers": datasets.features.Sequence(
                    {
                        "answer_start": datasets.Value("int32"),
                        "answer_text": datasets.Value("string"),
                    }
                ),
            }
        )

        return datasets.DatasetInfo(
            # This is the description that will appear on the datasets page.
            description=_DESCRIPTION,
            # This defines the different columns of the dataset and their types
            features=features,
            # If there's a common (input, target) tuple from the features,
            # specify them here. They'll be used if as_supervised=True in
            # builder.as_dataset.
            supervised_keys=None,
            # Homepage of the dataset for documentation
            homepage=_HOMEPAGE,
            # License for the dataset if available
            license=_LICENSE,
            # Citation for the dataset
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        data_dir = dl_manager.download_and_extract(_URLs)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                # These kwargs will be passed to _generate_examples
                gen_kwargs={
                    "filepath": data_dir["train"],
                    "split": "train",
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                # These kwargs will be passed to _generate_examples
                gen_kwargs={"filepath": data_dir["test"], "split": "test"},
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION,
                # These kwargs will be passed to _generate_examples
                gen_kwargs={
                    "filepath": data_dir["dev"],
                    "split": "dev",
                },
            ),
        ]

    def _generate_examples(self, filepath, split):
        logger.info("generating examples from = %s", filepath)

        def get_answer_index(passage, answer):
            # Character offset of the answer inside the passage; -1 if it does not occur verbatim.
            return passage.index(answer) if answer in passage else -1

        with open(filepath, encoding="utf-8") as f:
            for id_, row in enumerate(f):
                data = json.loads(row)
                answer = data["answers"]
                if isinstance(answer[0], str):
                    # Raw answers given as plain strings: recover their start offsets from the passage.
                    answer = [{"answer_start": get_answer_index(data["passage"], x), "answer_text": x} for x in answer]
                else:
                    # Raw answers given as (answer_start, answer_text) pairs.
                    answer = [{"answer_start": x[0], "answer_text": x[1]} for x in answer]
                yield id_, {
                    "question": data["question"],
                    "url": str(data["url"]),
                    "context": data["passage"],
                    "answers": answer,
                }
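
The builder above defines a single configuration, `parsinlu-repo`, so the script can also be exercised locally before (or instead of) pulling the dataset from the Hub. A minimal sketch, assuming the script is saved as `parsinlu_reading_comprehension.py` in the working directory:

```python
from datasets import load_dataset

# Load through the local builder script, naming the config explicitly.
dataset = load_dataset("./parsinlu_reading_comprehension.py", "parsinlu-repo")

print(dataset["validation"][0]["question"])
```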