Datasets:
Tasks:
Question Answering
Modalities:
Text
Formats:
parquet
Sub-tasks:
extractive-qa
Size:
100K - 1M
License:
apache-2.0
parquet-converter committed
Commit • 9880274 • 1 Parent(s): 68b4843
Update parquet files
.gitattributes
DELETED
@@ -1,55 +0,0 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.lz4 filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
# Audio files - uncompressed
*.pcm filter=lfs diff=lfs merge=lfs -text
*.sam filter=lfs diff=lfs merge=lfs -text
*.raw filter=lfs diff=lfs merge=lfs -text
# Audio files - compressed
*.aac filter=lfs diff=lfs merge=lfs -text
*.flac filter=lfs diff=lfs merge=lfs -text
*.mp3 filter=lfs diff=lfs merge=lfs -text
*.ogg filter=lfs diff=lfs merge=lfs -text
*.wav filter=lfs diff=lfs merge=lfs -text
# Image files - uncompressed
*.bmp filter=lfs diff=lfs merge=lfs -text
*.gif filter=lfs diff=lfs merge=lfs -text
*.png filter=lfs diff=lfs merge=lfs -text
*.tiff filter=lfs diff=lfs merge=lfs -text
# Image files - compressed
*.jpg filter=lfs diff=lfs merge=lfs -text
*.jpeg filter=lfs diff=lfs merge=lfs -text
*.webp filter=lfs diff=lfs merge=lfs -text
/Users/knf792/PycharmProjects/NLP_course/nlp_course_tydiqa/tidyqa_answerable/train/dataset_info.json filter=lfs diff=lfs merge=lfs -text
/Users/knf792/PycharmProjects/NLP_course/nlp_course_tydiqa/tidyqa_answerable/train/state.json filter=lfs diff=lfs merge=lfs -text
/Users/knf792/PycharmProjects/NLP_course/nlp_course_tydiqa/tidyqa_answerable/validation/dataset_info.json filter=lfs diff=lfs merge=lfs -text
/Users/knf792/PycharmProjects/NLP_course/nlp_course_tydiqa/tidyqa_answerable/validation/state.json filter=lfs diff=lfs merge=lfs -text
README.md
DELETED
@@ -1,118 +0,0 @@
---
annotations_creators:
- crowdsourced
language:
- en
- ar
- bn
- fi
- id
- ja
- sw
- ko
- ru
- te
- th
language_creators:
- crowdsourced
license:
- apache-2.0
multilinguality:
- multilingual
pretty_name: Answerable TyDi QA
size_categories:
- ['100K<n<1M']
source_datasets:
- extended|wikipedia
task_categories:
- question-answering
task_ids:
- extractive-qa
---
# Dataset Card for "answerable-tydiqa"

## Dataset Description

- **Homepage:** [https://github.com/google-research-datasets/tydiqa](https://github.com/google-research-datasets/tydiqa)
- **Paper:** [Paper](https://aclanthology.org/2020.tacl-1.30/)
- **Size of downloaded dataset files:** 75.43 MB
- **Size of the generated dataset:** 131.78 MB
- **Total amount of disk used:** 207.21 MB

### Dataset Summary

[TyDi QA](https://huggingface.co/datasets/tydiqa) is a question answering dataset covering 11 typologically diverse languages.
Answerable TyDi QA extends the GoldP subtask of the original TyDi QA dataset to also include unanswerable questions.

## Dataset Structure

The dataset contains a train and a validation set, with 116,067 and 13,325 examples, respectively. Access them with:

```py
from datasets import load_dataset

dataset = load_dataset("copenlu/answerable_tydiqa")
train_set = dataset["train"]
validation_set = dataset["validation"]
```

### Data Instances

Here is an example instance of the dataset:

```
{'question_text': 'dimanakah Dr. Ernest François Eugène Douwes Dekker meninggal?',
 'document_title': 'Ernest Douwes Dekker',
 'language': 'indonesian',
 'annotations': {'answer_start': [45],
                 'answer_text': ['28 Agustus 1950']},
 'document_plaintext': 'Ernest Douwes Dekker wafat dini hari tanggal 28 Agustus 1950 (tertulis di batu nisannya; 29 Agustus 1950 versi van der Veur, 2006) dan dimakamkan di TMP Cikutra, Bandung.',
 'document_url': 'https://id.wikipedia.org/wiki/Ernest%20Douwes%20Dekker'}
```

Description of the dataset columns:

| Column name | Type | Description |
| ----------- | ---- | ----------- |
| document_title | str | The title of the Wikipedia article from which the data instance was generated |
| document_url | str | The URL of that article |
| language | str | The language of the data instance |
| question_text | str | The question to answer |
| document_plaintext | str | The context, a Wikipedia paragraph that might or might not contain the answer to the question |
| annotations["answer_start"] | list[int] | The character index in 'document_plaintext' at which the answer starts; [-1] if the question is unanswerable |
| annotations["answer_text"] | list[str] | The answer, a span of text from 'document_plaintext'; [''] if the question is unanswerable |

**Notice:** If the question is *answerable*, annotations["answer_start"] and annotations["answer_text"] each contain a list of length 1
(in some variations of the dataset the lists might be longer, e.g. if more than one person annotated the instance, but not in our case).
If the question is *unanswerable*, annotations["answer_start"] contains [-1], while annotations["answer_text"] contains a list with a single empty string. The sketch below shows how to tell the two cases apart.
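A minimal sketch (not part of the original card) of separating answerable from unanswerable questions; it uses only the load call and the answer_start convention documented above:

```py
from datasets import load_dataset

dataset = load_dataset("copenlu/answerable_tydiqa")

def is_answerable(example):
    # Unanswerable questions are marked with answer_start == [-1].
    return example["annotations"]["answer_start"][0] != -1

answerable = dataset["train"].filter(is_answerable)
unanswerable = dataset["train"].filter(lambda ex: not is_answerable(ex))
print(len(answerable), len(unanswerable))  # the two counts sum to 116067
```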

## Useful stuff

Check out the [datasets documentation](https://huggingface.co/docs/datasets/quickstart) to learn how to manipulate and use the dataset. Specifically, you might find the following functions useful:

`dataset.filter`, for filtering out data (useful for keeping instances of specific languages, for example).

`dataset.map`, for manipulating the dataset.

`dataset.to_pandas`, to convert the dataset into a pandas.DataFrame format. All three are put to work in the sketch below.

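A short sketch combining the three helpers; the lowercase language value "english" is an assumption inferred from the 'indonesian' example above, so verify it against your copy of the data:

```py
from datasets import load_dataset

dataset = load_dataset("copenlu/answerable_tydiqa")

# Keep only English instances (assumed lowercase label, see lead-in).
english = dataset["train"].filter(lambda ex: ex["language"] == "english")

# Add a derived column with the question length in characters.
english = english.map(lambda ex: {"question_len": len(ex["question_text"])})

# Convert to pandas for quick inspection.
df = english.to_pandas()
print(df[["question_text", "question_len"]].head())
```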
```
@article{tydiqa,
  title = {TyDi QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
  author = {Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki},
  year = {2020},
  journal = {Transactions of the Association for Computational Linguistics}
}
```

### Contributions

Thanks to [@thomwolf](https://github.com/thomwolf), [@albertvillanova](https://github.com/albertvillanova), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
data/train-00000-of-00001-af2f3eaa87d1aa8b.parquet → copenlu--nlp_course_tydiqa/parquet-train.parquet
RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:d57ed6d329c51284a602ab90042a98fe09ce0edb92abab6a32d5f1f4178c9a72
+size 72970367
data/validation-00000-of-00001-1f04eb244a33fa1b.parquet → copenlu--nlp_course_tydiqa/parquet-validation.parquet
RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:a23688b24adf383e7efe4b909c167490d62c5defe93b4f7c4c4733e468128668
+size 7971643
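Because the splits are now stored as plain Parquet, they can also be loaded without the dataset script. A minimal sketch, assuming the two files from this commit have been downloaded locally under the paths shown above:

```py
from datasets import load_dataset

dataset = load_dataset(
    "parquet",
    data_files={
        "train": "copenlu--nlp_course_tydiqa/parquet-train.parquet",
        "validation": "copenlu--nlp_course_tydiqa/parquet-validation.parquet",
    },
)
print(dataset["train"].num_rows)       # expected: 116067
print(dataset["validation"].num_rows)  # expected: 13325
```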
dataset_infos.json
DELETED
@@ -1 +0,0 @@
{"copenlu--nlp_course_tydiqa": {"description": "TyDi QA is a question answering dataset covering 11 typologically diverse languages with 204K question-answer pairs.\nThe languages of TyDi QA are diverse with regard to their typology -- the set of linguistic features that each language\nexpresses -- such that we expect models performing well on this set to generalize across a large number of the languages\nin the world. It contains language phenomena that would not be found in English-only corpora. To provide a realistic\ninformation-seeking task and avoid priming effects, questions are written by people who want to know the answer, but\ndon\u2019t know the answer yet, (unlike SQuAD and its descendents) and the data is collected directly in each language without\nthe use of translation (unlike MLQA and XQuAD).", "citation": "@article{tydiqa,\ntitle = {TyDi QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},\nauthor = {Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki}\nyear = {2020},\njournal = {Transactions of the Association for Computational Linguistics}\n}", "homepage": "https://github.com/google-research-datasets/tydiqa", "license": "", "features": {"question_text": {"dtype": "string", "id": null, "_type": "Value"}, "document_title": {"dtype": "string", "id": null, "_type": "Value"}, "language": {"dtype": "string", "id": null, "_type": "Value"}, "annotations": {"answer_start": {"feature": {"dtype": "int64", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "answer_text": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "document_plaintext": {"dtype": "string", "id": null, "_type": "Value"}, "document_url": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": null, "config_name": null, "version": null, "splits": {"train": {"name": "train", "num_bytes": 124619340.13776328, "num_examples": 116067, "dataset_name": "nlp_course_tydiqa"}, "validation": {"name": "validation", "num_bytes": 13556959.14528953, "num_examples": 13325, "dataset_name": "nlp_course_tydiqa"}}, "download_checksums": null, "download_size": 79095134, "post_processing_size": null, "dataset_size": 138176299.2830528, "size_in_bytes": 217271433.2830528}}