Gholamreza committed
Commit f1c12d6 • 1 Parent(s): 4a331c3
Update README.md

README.md CHANGED
@@ -1,39 +1,140 @@
  ---
  dataset_info:
    features:
    - name: title
      dtype: string
-   - name:
-
-
      dtype: string
-   - name:
-
-
-     list:
-     - name: answer_start
-       dtype: int64
-     - name: text
-       dtype: string
-   - name: id
-     dtype: string
-   - name: is_impossible
-     dtype: bool
-   - name: question
-     dtype: string
    splits:
    - name: train
-     num_bytes:
-     num_examples:
    - name: test
-     num_bytes:
-     num_examples:
-
-
-     num_examples: 120
-   download_size: 12488927
-   dataset_size: 27405203
  ---
  # Dataset Card for "pquad"

- [
---
pretty_name: PQuAD
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- fa
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- open-domain-qa
- extractive-qa
paperswithcode_id: squad
train-eval-index:
- config: pquad
  task: question-answering
  task_id: extractive_question_answering
  splits:
    train_split: train
    eval_split: validation
  col_mapping:
    question: question
    context: context
    answers:
      text: text
      answer_start: answer_start
  metrics:
  - type: pquad
    name: PQuAD
dataset_info:
  features:
  - name: id
    dtype: int32
  - name: title
    dtype: string
  - name: context
    dtype: string
  - name: question
    dtype: string
  - name: answers
    sequence:
    - name: text
      dtype: string
    - name: answer_start
      dtype: int32
  config_name: pquad
  splits:
  - name: train
    num_bytes: ...
    num_examples: 63994
  - name: validation
    num_bytes: ...
    num_examples: 7976
  - name: test
    num_bytes: ...
    num_examples: 8002
  download_size: ...
  dataset_size: ...
---

# Dataset Card for "pquad"

## PQuAD

*THIS IS A NON-OFFICIAL VERSION OF THE DATASET UPLOADED TO HUGGINGFACE BY [Gholamreza Dar](https://huggingface.co/Gholamreza)*

The original repository for the dataset is https://github.com/AUT-NLP/PQuAD

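A minimal loading sketch with the `datasets` library (an illustrative addition, not an official snippet). The repo id `Gholamreza/pquad` is an assumption inferred from the uploader's namespace and the dataset name; check the dataset page for the exact id.

```python
def first_answer(example: dict):
    """Return the first gold answer text, or None for unanswerable
    questions (whose "answers" lists are empty in SQuAD-style data)."""
    texts = example["answers"]["text"]
    return texts[0] if texts else None


if __name__ == "__main__":
    # Requires the `datasets` package; the repo id is an assumption.
    from datasets import load_dataset

    ds = load_dataset("Gholamreza/pquad")
    sample = ds["train"][0]
    print(sample["question"], "->", first_answer(sample))
```

Unanswerable questions (about 25% of the dataset, per the statistics below) come back with empty answer lists, so code iterating over examples should handle that case as `first_answer` does.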
Original README.md:

PQuAD is a crowd-sourced reading comprehension dataset for the Persian language. It includes 80,000 questions along with their answers, with 25% of the questions being unanswerable. As a reading comprehension dataset, it requires a system to read a passage and then answer the given questions from the passage. PQuAD's questions are based on Persian Wikipedia articles and cover a wide variety of subjects. Articles used for question generation are quality-checked and contain few non-Persian words.

The dataset is divided into train, validation, and test sets, with the following statistics:

```
+----------------------------+-------+------------+------+-------+
|                            | Train | Validation | Test | Total |
+----------------------------+-------+------------+------+-------+
| Total Questions            | 63994 | 7976       | 8002 | 79972 |
| Unanswerable Questions     | 15721 | 1981       | 1914 | 19616 |
| Mean # of paragraph tokens | 125   | 121        | 124  | 125   |
| Mean # of question tokens  | 10    | 11         | 11   | 10    |
| Mean # of answer tokens    | 5     | 6          | 5    | 5     |
+----------------------------+-------+------------+------+-------+
```

Workers were encouraged to use paraphrased sentences in their questions and to avoid choosing answers that contain non-Persian words. Another group of crowdworkers validated the questions and answers in the test and validation sets to ensure their quality. They also provided additional answers to the questions in those sets where possible. This helps cover all plausible answers and gives a better evaluation of models.

PQuAD is stored in JSON format and consists of passages, each linked to a set of questions. Each answer is specified by its span (the start and end points of the answer in the paragraph), and unanswerable questions are marked as such.

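The span encoding can be sketched as follows (an illustrative addition to this card, not part of the original README). The field names follow the common SQuAD schema, which this card's feature list mirrors; the raw PQuAD JSON may differ in detail, and the record below is a made-up English example.

```python
def answer_from_span(context: str, answer_start: int, text: str) -> str:
    """Recover the answer string by slicing the paragraph at its
    character offset (SQuAD-style span encoding)."""
    return context[answer_start:answer_start + len(text)]


# Hypothetical record in the SQuAD-style layout described above.
record = {
    "context": "PQuAD is a Persian reading comprehension dataset.",
    "question": "What language is PQuAD in?",
    "answers": {"text": ["Persian"], "answer_start": [11]},
    "is_impossible": False,
}

recovered = answer_from_span(
    record["context"],
    record["answers"]["answer_start"][0],
    record["answers"]["text"][0],
)
assert recovered == "Persian"
```

The assertion holds because `answer_start` is a character index into the paragraph, so slicing the context at that offset must reproduce the stored answer text.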
The estimated human performance on the test set is 88.3% F1 and 80.3% EM. We have evaluated PQuAD using two pre-trained transformer-based language models, namely ParsBERT (Farahani et al., 2021) and XLM-RoBERTa (Conneau et al., 2020), as well as BiDAF (Levy et al., 2017), an attention-based model proposed for MRC.

```
+-------------+------+------+-----------+-----------+-------------+
| Model       | EM   | F1   | HasAns_EM | HasAns_F1 | NoAns_EM/F1 |
+-------------+------+------+-----------+-----------+-------------+
| BNA         | 54.4 | 71.4 | 43.9      | 66.4      | 87.6        |
| ParsBERT    | 68.1 | 82.0 | 61.5      | 79.8      | 89.0        |
| XLM-RoBERTa | 74.8 | 87.6 | 69.1      | 86.0      | 92.7        |
| Human       | 80.3 | 88.3 | 74.9      | 85.6      | 96.8        |
+-------------+------+------+-----------+-----------+-------------+
```

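A simplified sketch of the exact-match (EM) metric reported in the table (an illustrative addition, not the official scorer, which may apply additional language-specific normalization for Persian text):

```python
import string


def normalize(s: str) -> str:
    """Lowercase, strip ASCII punctuation, and collapse whitespace."""
    s = "".join(ch for ch in s.lower() if ch not in string.punctuation)
    return " ".join(s.split())


def exact_match(prediction: str, gold_answers: list[str]) -> bool:
    """EM counts a prediction correct if it equals any gold answer
    after normalization; F1 instead scores token overlap."""
    return any(normalize(prediction) == normalize(g) for g in gold_answers)
```

Scoring against all gold answers is why the extra answers collected for the validation and test sets matter: a prediction matching any validated answer counts as correct.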
PQuAD is developed by Mabna Intelligent Computing at Amirkabir Science and Technology Park in collaboration with the NLP lab of the Amirkabir University of Technology and is supported by the Vice Presidency for Science and Technology. By releasing this dataset, we aim to ease research on Persian reading comprehension and the development of Persian question answering systems.

This work is licensed under a
[Creative Commons Attribution-ShareAlike 4.0 International License][cc-by-sa].

[![CC BY-SA 4.0][cc-by-sa-image]][cc-by-sa]

[cc-by-sa]: http://creativecommons.org/licenses/by-sa/4.0/
[cc-by-sa-image]: https://licensebuttons.net/l/by-sa/4.0/88x31.png
[cc-by-sa-shield]: https://img.shields.io/badge/License-CC%20BY--SA%204.0-lightgrey.svg