Tasks: Question Answering
Sub-tasks: extractive-qa
Languages: Persian
Multilinguality: monolingual
Size Categories: 1K<n<10K
Language Creators: expert-generated
Annotations Creators: expert-generated
Source Datasets: extended|wikipedia|google
ArXiv: 2012.06154
License: cc-by-nc-sa-4.0

Commit e6daca8 by parquet-converter
Parent(s): afd64db
Update parquet files
Files changed:
- README.md +0 -194
- dataset_infos.json +0 -1
- parsinlu-repo/test/0000.parquet +3 -0
- parsinlu-repo/train/0000.parquet +3 -0
- parsinlu-repo/validation/0000.parquet +3 -0
- parsinlu_reading_comprehension.py +0 -141
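
This commit removes the JSONL loading script and README in favor of pre-converted Parquet shards under `parsinlu-repo/`. A minimal sketch of loading the converted dataset, assuming the Hub repo id `persiannlp/parsinlu_reading_comprehension` and a recent `datasets` release:

```python
from datasets import load_dataset

# After parquet conversion, this reads the Parquet shards directly
# instead of executing the (now deleted) loading script.
ds = load_dataset("persiannlp/parsinlu_reading_comprehension", "parsinlu-repo")

print(ds)  # DatasetDict with train/test/validation splits
print(ds["train"][0]["question"])
```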
README.md
DELETED
@@ -1,194 +0,0 @@
---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- fa
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended|wikipedia|google
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: null
pretty_name: PersiNLU (Reading Comprehension)
dataset_info:
  features:
  - name: question
    dtype: string
  - name: url
    dtype: string
  - name: context
    dtype: string
  - name: answers
    sequence:
    - name: answer_start
      dtype: int32
    - name: answer_text
      dtype: string
  config_name: parsinlu-repo
  splits:
  - name: train
    num_bytes: 747679
    num_examples: 600
  - name: test
    num_bytes: 681945
    num_examples: 575
  - name: validation
    num_bytes: 163185
    num_examples: 125
  download_size: 4117863
  dataset_size: 1592809
---

# Dataset Card for PersiNLU (Reading Comprehension)

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Github](https://github.com/persiannlp/parsinlu/)
- **Repository:** [Github](https://github.com/persiannlp/parsinlu/)
- **Paper:** [Arxiv](https://arxiv.org/abs/2012.06154)
- **Leaderboard:**
- **Point of Contact:** [email](mailto:d.khashabi@gmail.com)

### Dataset Summary

A Persian reading comprehension task (generating an answer, given a question and a context paragraph).
The questions are mined using Google auto-complete; their answers and the corresponding evidence documents are manually annotated by native speakers.

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

The text dataset is in Persian (`fa`).

## Dataset Structure

### Data Instances

Here is an example from the dataset:
```
{
    'question': 'پیامبر در چه سالی به پیامبری رسید؟',
    'url': 'https://fa.wikipedia.org/wiki/%D9%85%D8%AD%D9%85%D8%AF',
    'context': 'محمد که از روش زندگی مردم مکه ناخشنود بود، گهگاه در غار حرا در یکی از کوه‌های اطراف آن دیار به تفکر و عبادت می‌پرداخت. به باور مسلمانان، محمد در همین مکان و در حدود ۴۰ سالگی از طرف خدا به پیامبری برگزیده، و وحی بر او فروفرستاده شد. در نظر آنان، دعوت محمد همانند دعوت دیگر پیامبرانِ کیش یکتاپرستی مبنی بر این بود که خداوند (الله) یکتاست و تسلیم شدن برابر خدا راه رسیدن به اوست.',
    'answers': [
        {'answer_start': 160, 'answer_text': 'حدود ۴۰ سالگی'}
    ]
}
```

### Data Fields

- `question`: the question, mined using Google auto-complete.
- `context`: the passage that contains the answer.
- `url`: the URL from which the passage was mined.
- `answers`: a list of answers, each with an `answer_start` character index and an `answer_text` string. Note that in the test set, some `answer_start` values are missing and are replaced with `-1`; see the sketch after this list.
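
A minimal sketch of recovering usable character spans for the missing test-set offsets, assuming the split has been loaded into a variable `ds` (hypothetical name) with the schema above:

```python
# `answers` is a Sequence feature, so each example exposes it as a
# dict of parallel lists: {"answer_start": [...], "answer_text": [...]}.
def resolve_spans(example):
    spans = []
    for start, text in zip(example["answers"]["answer_start"],
                           example["answers"]["answer_text"]):
        if start == -1:
            # Fall back to a verbatim search in the context; stays -1
            # if the answer text does not occur verbatim.
            start = example["context"].find(text)
        spans.append({"answer_start": start, "answer_text": text})
    return spans

resolved = [resolve_spans(ex) for ex in ds["test"]]
```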

### Data Splits

The train, validation, and test splits contain 600, 125, and 575 examples, respectively.

## Dataset Creation

### Curation Rationale

The questions were collected via Google auto-complete.
The answers were annotated by native speakers.
For more details, see [the corresponding paper](https://arxiv.org/abs/2012.06154).

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

This dataset is provided for research purposes only. See the dataset license for additional information.

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

CC BY-NC-SA 4.0 License

### Citation Information

```bibtex
@article{huggingface:dataset,
    title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
    author = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
    year = {2020},
    journal = {arXiv e-prints},
    eprint = {2012.06154},
}
```

### Contributions

Thanks to [@danyaljj](https://github.com/danyaljj) for adding this dataset.
dataset_infos.json
DELETED
@@ -1 +0,0 @@
{"parsinlu-repo": {"description": "A Persian reading comprehension task (generating an answer, given a question and a context paragraph). \nThe questions are mined using Google auto-complete, their answers and the corresponding evidence documents are manually annotated by native speakers. \n", "citation": "@article{huggingface:dataset,\n title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},\n authors = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},\n year={2020}\n journal = {arXiv e-prints},\n eprint = {2012.06154}, \n}\n", "homepage": "https://github.com/persiannlp/parsinlu/", "license": "CC BY-NC-SA 4.0", "features": {"question": {"dtype": "string", "id": null, "_type": "Value"}, "url": {"dtype": "string", "id": null, "_type": "Value"}, "context": {"dtype": "string", "id": null, "_type": "Value"}, "answers": {"feature": {"answer_start": {"dtype": "int32", "id": null, "_type": "Value"}, "answer_text": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "parsinlu_reading_comprehension", "config_name": "parsinlu-repo", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 747679, "num_examples": 600, "dataset_name": "parsinlu_reading_comprehension"}, "test": {"name": "test", "num_bytes": 681945, "num_examples": 575, "dataset_name": "parsinlu_reading_comprehension"}, "validation": {"name": "validation", "num_bytes": 163185, "num_examples": 125, "dataset_name": "parsinlu_reading_comprehension"}}, "download_checksums": {"https://raw.githubusercontent.com/persiannlp/parsinlu/master/data/reading_comprehension/train.jsonl": {"num_bytes": 1933004, "checksum": "488fa21f303d880b82b8ba590e0c5a5b61dfb1442a96aa2db19f487a16f5e480"}, "https://raw.githubusercontent.com/persiannlp/parsinlu/master/data/reading_comprehension/dev.jsonl": {"num_bytes": 424640, "checksum": "6ce2aed6d8ace6ed7f9ef4db9baba3b5efdfa9f99d605dccb494ce39cd63c9c6"}, "https://raw.githubusercontent.com/persiannlp/parsinlu/master/data/reading_comprehension/eval.jsonl": {"num_bytes": 1760219, "checksum": "95ac9cec4548cb35a5b7b2d85dabbd89fe0e724245935fdeeaddea3c07e644fe"}}, "download_size": 4117863, "post_processing_size": null, "dataset_size": 1592809, "size_in_bytes": 5710672}}
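
The `download_checksums` map above pairs each source URL with a sha256 digest. A minimal sketch of verifying a downloaded split against its recorded checksum, using only the standard library (the local filename is a hypothetical choice):

```python
import hashlib

path = "train.jsonl"  # hypothetical local copy of the train split
expected = "488fa21f303d880b82b8ba590e0c5a5b61dfb1442a96aa2db19f487a16f5e480"

digest = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
        digest.update(chunk)

assert digest.hexdigest() == expected, "checksum mismatch"
```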
parsinlu-repo/test/0000.parquet
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a843824890d9491d91cf5061cd68f938afb0702e10b7ea679ef9b61e63979ffa
size 322194
parsinlu-repo/train/0000.parquet
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:12737d6016dd6f62c15b93a0a974c943fb5a00d139ab46d5df45921e8a1816a6
size 363221
parsinlu-repo/validation/0000.parquet
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:31e429fdd2a1897ead6d2010c159792ec7f4387ecf9c88416a77df382eda95ae
size 91963
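
Each added file above is a Git LFS pointer to a Parquet shard, not the shard itself. A minimal sketch of inspecting one shard once the LFS objects have been pulled locally, assuming `pandas` with a Parquet engine such as `pyarrow` is installed:

```python
import pandas as pd

# Hypothetical local path after `git lfs pull`.
df = pd.read_parquet("parsinlu-repo/train/0000.parquet")

print(df.columns.tolist())  # expected columns: question, url, context, answers
print(len(df))              # expected 600 rows for the train shard
```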
parsinlu_reading_comprehension.py
DELETED
@@ -1,141 +0,0 @@
# coding=utf-8
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""ParsiNLU Persian reading comprehension task"""


import json

import datasets


logger = datasets.logging.get_logger(__name__)

_CITATION = """\
@article{huggingface:dataset,
    title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
    author = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
    year = {2020},
    journal = {arXiv e-prints},
    eprint = {2012.06154},
}
"""

# You can copy an official description
_DESCRIPTION = """\
A Persian reading comprehension task (generating an answer, given a question and a context paragraph).
The questions are mined using Google auto-complete, their answers and the corresponding evidence documents are manually annotated by native speakers.
"""

_HOMEPAGE = "https://github.com/persiannlp/parsinlu/"

_LICENSE = "CC BY-NC-SA 4.0"

_URL = "https://raw.githubusercontent.com/persiannlp/parsinlu/master/data/reading_comprehension/"
_URLs = {
    "train": _URL + "train.jsonl",
    "dev": _URL + "dev.jsonl",
    "test": _URL + "eval.jsonl",
}


class ParsinluReadingComprehension(datasets.GeneratorBasedBuilder):
    """ParsiNLU Persian reading comprehension task."""

    VERSION = datasets.Version("1.0.0")

    BUILDER_CONFIGS = [
        datasets.BuilderConfig(
            name="parsinlu-repo", version=VERSION, description="ParsiNLU repository: reading-comprehension"
        ),
    ]

    def _info(self):
        features = datasets.Features(
            {
                "question": datasets.Value("string"),
                "url": datasets.Value("string"),
                "context": datasets.Value("string"),
                "answers": datasets.features.Sequence(
                    {
                        "answer_start": datasets.Value("int32"),
                        "answer_text": datasets.Value("string"),
                    }
                ),
            }
        )

        return datasets.DatasetInfo(
            # This is the description that will appear on the datasets page.
            description=_DESCRIPTION,
            # This defines the different columns of the dataset and their types
            features=features,  # Here we define them above because they are different between the two configurations
            # If there's a common (input, target) tuple from the features,
            # specify them here. They'll be used if as_supervised=True in
            # builder.as_dataset.
            supervised_keys=None,
            # Homepage of the dataset for documentation
            homepage=_HOMEPAGE,
            # License for the dataset if available
            license=_LICENSE,
            # Citation for the dataset
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        data_dir = dl_manager.download_and_extract(_URLs)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                # These kwargs will be passed to _generate_examples
                gen_kwargs={
                    "filepath": data_dir["train"],
                    "split": "train",
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                # These kwargs will be passed to _generate_examples
                gen_kwargs={"filepath": data_dir["test"], "split": "test"},
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION,
                # These kwargs will be passed to _generate_examples
                gen_kwargs={
                    "filepath": data_dir["dev"],
                    "split": "dev",
                },
            ),
        ]

    def _generate_examples(self, filepath, split):
        logger.info("generating examples from = %s", filepath)

        def get_answer_index(passage, answer):
            return passage.index(answer) if answer in passage else -1

        with open(filepath, encoding="utf-8") as f:
            for id_, row in enumerate(f):
                data = json.loads(row)
                answer = data["answers"]
                if type(answer[0]) == str:
                    answer = [{"answer_start": get_answer_index(data["passage"], x), "answer_text": x} for x in answer]
                else:
                    answer = [{"answer_start": x[0], "answer_text": x[1]} for x in answer]
                yield id_, {
                    "question": data["question"],
                    "url": str(data["url"]),
                    "context": data["passage"],
                    "answers": answer,
                }
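
For reference, `_generate_examples` above normalizes two raw answer formats, plain strings and `(start, text)` pairs, into the `answer_start`/`answer_text` schema. A minimal standalone sketch of that normalization on a hypothetical raw JSONL row:

```python
import json

# Hypothetical row in the plain-string answer format.
row = '{"question": "q", "url": "u", "passage": "abc xyz", "answers": ["xyz"]}'
data = json.loads(row)

answer = data["answers"]
if isinstance(answer[0], str):
    # Plain strings: locate each answer inside the passage, -1 if absent.
    answer = [
        {"answer_start": data["passage"].index(x) if x in data["passage"] else -1,
         "answer_text": x}
        for x in answer
    ]
else:
    # (start, text) pairs already carry the character offset.
    answer = [{"answer_start": x[0], "answer_text": x[1]} for x in answer]

print(answer)  # [{'answer_start': 4, 'answer_text': 'xyz'}]
```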