Commit 6ef626f (0 parents), committed by system (HF staff)

Update files from the datasets library (from 1.4.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.4.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,204 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - found
+ languages:
+ - th
+ licenses:
+ - mit
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 1K<n<10K
+ source_datasets:
+ - extended|other-iapp-wiki-qa-dataset
+ task_categories:
+ - question-answering
+ task_ids:
+ - extractive-qa
+ - open-domain-qa
+ ---
+
+ # Dataset Card for `iapp_wiki_qa_squad`
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** https://github.com/iapp-technology/iapp-wiki-qa-dataset
+ - **Repository:** https://github.com/iapp-technology/iapp-wiki-qa-dataset
+ - **Paper:**
+ - **Leaderboard:**
+ - **Point of Contact:** https://github.com/iapp-technology/iapp-wiki-qa-dataset
+
+ ### Dataset Summary
+
+ `iapp_wiki_qa_squad` is an extractive question answering dataset built from Thai Wikipedia articles. It is adapted from [the original iapp-wiki-qa-dataset](https://github.com/iapp-technology/iapp-wiki-qa-dataset) to [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format, resulting in 5761/742/739 questions from 1529/191/192 articles in the train/validation/test splits.
+
+ ### Supported Tasks and Leaderboards
+
+ extractive question answering
+
+ ### Languages
+
+ Thai
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ An example from the dataset:
+ ```
+ {'article_id': '0U2lA8nJQESIxbZrjZQc',
+  'question_id': '0U2lA8nJQESIxbZrjZQc_000',
+  'context': 'นายสุวัฒน์ วรรณศิริกุล (1 พฤศจิกายน พ.ศ. 2476 - 31 กรกฎาคม พ.ศ. 2555) อดีตรองหัวหน้าพรรคพลังประชาชน อดีตประธานสมาชิกสภาผู้แทนราษฎร และประธานภาคกรุงเทพมหานคร พรรคพลังประชาชน อดีตสมาชิกสภาผู้แทนราษฎรกรุงเทพมหานครหลายสมัย ได้รับการเลือกตั้งเป็นสมาชิกสภาผู้แทนราษฎรครั้งแรกในปี พ.ศ. 2529 ในสังกัดพรรคประชากรไทย และสังกัดพรรคพลังประชาชน เป็นพรรคสุดท้าย',
+  'question': 'สุวัฒน์ วรรณศิริกุล เกิดวันที่เท่าไร',
+  'answers': {'text': ['1 พฤศจิกายน พ.ศ. 2476'],
+              'answer_start': [24],
+              'answer_end': [45]},
+  'title': 'สุวัฒน์ วรรณศิริกุล',
+  'created_by': 'gmnjGRF0y0g7QRZDd9Qgz3AgiHJ3',
+  'created_on': '2019-08-18 05:05:51.358000+00:00',
+  'is_pay': {'date': None, 'status': False}}
+ {'article_id': '01KZTrxgvC5mOovXFMPJ',
+  'question_id': '01KZTrxgvC5mOovXFMPJ_000',
+  'context': 'พัทธ์ธีรา ศรุติพงศ์โภคิน (เกิด 3 ธันวาคม พ.ศ. 2533) หรือชื่อเล่นว่า อร เป็นนักแสดงหญิงชาวไทย สำเร็จมัธยมศึกษาจากCatholic Cathedral College ประเทศนิวซีแลนด์ และปริญญาตรีจากRaffles International College สาขา Business Marketing\n\nเข้าสู่วงการตั้งแต่อายุ 6 ขวบ จากการแสดงละครเวทีกับ ครูชลประคัลภ์ จันทร์เรือง จากนั้นก็เล่นโฆษณาในวัยเด็ก 2- 3 ชิ้น และยังเคยแสดงช่วงละครสั้น ในรายการซุปเปอร์จิ๋ว ประมาณปี 2542\n\nปัจจุบันเป็นทั้ง นักแสดง , พิธีกร และ วีเจ อยู่ที่คลื่น เก็ท 102.5 Bangkok International Hits Music Station และยังเป็นพิธีกรให้กับช่อง ทรู มิวสิก',
+  'question': 'พัทธ์ธีรา ศรุติพงศ์โภคิน เกิดวันที่เท่าไร',
+  'answers': {'text': ['3 ธันวาคม พ.ศ. 2533'],
+              'answer_start': [31],
+              'answer_end': [50]},
+  'title': 'พัทธ์ธีรา ศรุติพงศ์โภคิน',
+  'created_by': 'gmnjGRF0y0g7QRZDd9Qgz3AgiHJ3',
+  'created_on': '2019-08-07 14:00:38.778000+00:00',
+  'is_pay': {'status': True,
+             'total': 2.5,
+             'date': '2019-08-13 10:47:28.095000+00:00'}}
+ ```
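In SQuAD format, `answer_start` and `answer_end` index into `context` by Unicode code point, with `answer_end` as the exclusive upper bound. The first instance above can be checked directly with a stdlib-only sketch (the context is truncated here for brevity; the answer span lies entirely within this prefix):

```python
# First example instance from above, context truncated.
context = (
    "นายสุวัฒน์ วรรณศิริกุล (1 พฤศจิกายน พ.ศ. 2476 - 31 กรกฎาคม พ.ศ. 2555) "
    "อดีตรองหัวหน้าพรรคพลังประชาชน"
)
answers = {"text": ["1 พฤศจิกายน พ.ศ. 2476"], "answer_start": [24], "answer_end": [45]}

text = answers["text"][0]
start = answers["answer_start"][0]
end = answers["answer_end"][0]

# Slicing with the exclusive upper bound reproduces the answer text exactly.
assert context[start:end] == text
assert end == start + len(text)
print(context[start:end])  # → 1 พฤศจิกายน พ.ศ. 2476
```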
+
+ ### Data Fields
+
+ ```
+ {
+   "question_id": question id
+   "article_id": article id
+   "title": article title
+   "context": article texts
+   "question": question
+   "answers":
+     {
+       "text": answer text
+       "answer_start": answer beginning position
+       "answer_end": answer exclusive upper bound position
+     }
+ }
+ ```
+
+ ### Data Splits
+
+ |             | train | valid | test |
+ |-------------|-------|-------|------|
+ | # questions | 5761  | 742   | 739  |
+ | # articles  | 1529  | 191   | 192  |
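These counts line up with the 80/10/10 article-level split described under Dataset Creation; a quick check of the proportions:

```python
# Article counts per split, taken from the table above.
articles = {"train": 1529, "valid": 191, "test": 192}
total = sum(articles.values())

print(total)  # → 1912
print({name: round(n / total, 2) for name, n in articles.items()})
# → {'train': 0.8, 'valid': 0.1, 'test': 0.1}
```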
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ From the original `iapp-wiki-qa-dataset`, [@cstorm125](https://github.com/cstorm125/) applied the following processing:
+
+ - Select questions with one, non-empty answer
+ - Select questions whose answers match the `textDetection` fields
+ - Select questions whose answers are 100 characters long or shorter
+ - Split 80/10/10 into train/validation/test sets at the article level
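The three selection rules above can be sketched as a single predicate. This is illustrative only: the key names (`answers`, `textDetection`) are assumed for the purpose of the sketch, and the real preprocessing lives in the iapp-wiki-qa-dataset repository.

```python
def keep_question(record):
    """Hypothetical filter mirroring the three selection rules above.

    `record` is assumed to carry a list of answer strings under "answers"
    and a reference answer string under "textDetection".
    """
    answers = record.get("answers", [])
    # Rule 1: exactly one, non-empty answer
    if len(answers) != 1 or not answers[0].strip():
        return False
    answer = answers[0]
    # Rule 2: the answer must match the `textDetection` field
    if answer != record.get("textDetection"):
        return False
    # Rule 3: the answer must be 100 characters long or shorter
    return len(answer) <= 100

print(keep_question({"answers": ["1 พฤศจิกายน พ.ศ. 2476"],
                     "textDetection": "1 พฤศจิกายน พ.ศ. 2476"}))  # → True
print(keep_question({"answers": []}))  # → False
```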
+
+ #### Who are the source language producers?
+
+ Wikipedia authors for the contexts, and annotators hired by [iApp](https://iapp.co.th/) for the questions and answer annotations
+
+ ### Annotations
+
+ #### Annotation process
+
+ Annotators hired by [iApp](https://iapp.co.th/) are asked to create questions and answers for each article.
+
+ #### Who are the annotators?
+
+ Annotators hired by [iApp](https://iapp.co.th/)
+
+ ### Personal and Sensitive Information
+
+ All contents are from Wikipedia. No personal or sensitive information is expected to be included.
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ - open-domain, extractive question answering in Thai
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ Original dataset by [iApp](https://iapp.co.th/). SQuAD formatting by [PyThaiNLP](https://github.com/PyThaiNLP/).
+
+ ### Licensing Information
+
+ MIT
+
+ ### Citation Information
+
+ ```
+ @dataset{kobkrit_viriyayudhakorn_2021_4539916,
+   author    = {Kobkrit Viriyayudhakorn and
+                Charin Polpanumas},
+   title     = {iapp\_wiki\_qa\_squad},
+   month     = feb,
+   year      = 2021,
+   publisher = {Zenodo},
+   version   = 1,
+   doi       = {10.5281/zenodo.4539916},
+   url       = {https://doi.org/10.5281/zenodo.4539916}
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset.
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"iapp_wiki_qa_squad": {"description": "`iapp_wiki_qa_squad` is an extractive question answering dataset from Thai Wikipedia articles.\nIt is adapted from [the original iapp-wiki-qa-dataset](https://github.com/iapp-technology/iapp-wiki-qa-dataset)\nto [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format, resulting in\n5761/742/739 questions from 1529/191/192 articles.\n", "citation": "@dataset{kobkrit_viriyayudhakorn_2021_4539916,\n author = {Kobkrit Viriyayudhakorn and\n Charin Polpanumas},\n title = {iapp\\_wiki\\_qa\\_squad},\n month = feb,\n year = 2021,\n publisher = {Zenodo},\n version = 1,\n doi = {10.5281/zenodo.4539916},\n url = {https://doi.org/10.5281/zenodo.4539916}\n}\n", "homepage": "https://github.com/iapp-technology/iapp-wiki-qa-dataset/", "license": "", "features": {"question_id": {"dtype": "string", "id": null, "_type": "Value"}, "article_id": {"dtype": "string", "id": null, "_type": "Value"}, "title": {"dtype": "string", "id": null, "_type": "Value"}, "context": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "answers": {"feature": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "answer_start": {"dtype": "int32", "id": null, "_type": "Value"}, "answer_end": {"dtype": "int32", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "iapp_wiki_qa_squad", "config_name": "iapp_wiki_qa_squad", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 16107541, "num_examples": 5761, "dataset_name": "iapp_wiki_qa_squad"}, "validation": {"name": "validation", "num_bytes": 2120768, "num_examples": 742, "dataset_name": "iapp_wiki_qa_squad"}, "test": {"name": "test", "num_bytes": 2032016, "num_examples": 739, "dataset_name": "iapp_wiki_qa_squad"}}, "download_checksums": {"https://github.com/iapp-technology/iapp-wiki-qa-dataset/raw/main/squad_format/data.zip": {"num_bytes": 2876630, "checksum": "a3530ceafd3b39757fc9720ee0e04f1454e6775bec0c8641964427567fafb214"}}, "download_size": 2876630, "post_processing_size": null, "dataset_size": 20260325, "size_in_bytes": 23136955}}
dummy/iapp_wiki_qa_squad/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7f1a4b703afb0ef0bcf591b877d9ab743034bbe00dad7b054e30acac5e79f235
+ size 6376
iapp_wiki_qa_squad.py ADDED
@@ -0,0 +1,120 @@
+ from __future__ import absolute_import, division, print_function
+
+ import json
+ import os
+
+ import datasets
+
+
+ _CITATION = """\
+ @dataset{kobkrit_viriyayudhakorn_2021_4539916,
+   author    = {Kobkrit Viriyayudhakorn and
+                Charin Polpanumas},
+   title     = {iapp_wiki_qa_squad},
+   month     = feb,
+   year      = 2021,
+   publisher = {Zenodo},
+   version   = 1,
+   doi       = {10.5281/zenodo.4539916},
+   url       = {https://doi.org/10.5281/zenodo.4539916}
+ }
+ """
+
+ _DESCRIPTION = """\
+ `iapp_wiki_qa_squad` is an extractive question answering dataset from Thai Wikipedia articles.
+ It is adapted from [the original iapp-wiki-qa-dataset](https://github.com/iapp-technology/iapp-wiki-qa-dataset)
+ to [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format, resulting in
+ 5761/742/739 questions from 1529/191/192 articles.
+ """
+
+
+ class IappWikiQaSquadConfig(datasets.BuilderConfig):
+     def __init__(self, **kwargs):
+         """BuilderConfig
+
+         Args:
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super(IappWikiQaSquadConfig, self).__init__(**kwargs)
+
+
+ class IappWikiQaSquad(datasets.GeneratorBasedBuilder):
+     _DOWNLOAD_URL = "https://github.com/iapp-technology/iapp-wiki-qa-dataset/raw/main/squad_format/data.zip"
+     _TRAIN_FILE = "train.jsonl"
+     _VALID_FILE = "valid.jsonl"
+     _TEST_FILE = "test.jsonl"
+
+     BUILDER_CONFIGS = [
+         IappWikiQaSquadConfig(
+             name="iapp_wiki_qa_squad",
+             version=datasets.Version("1.0.0"),
+             description=_DESCRIPTION,
+         ),
+     ]
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             # This is the description that will appear on the datasets page.
+             description=_DESCRIPTION,
+             # datasets.features.FeatureConnectors
+             features=datasets.Features(
+                 {
+                     "question_id": datasets.Value("string"),
+                     "article_id": datasets.Value("string"),
+                     "title": datasets.Value("string"),
+                     "context": datasets.Value("string"),
+                     "question": datasets.Value("string"),
+                     "answers": datasets.features.Sequence(
+                         {
+                             "text": datasets.Value("string"),
+                             "answer_start": datasets.Value("int32"),
+                             "answer_end": datasets.Value("int32"),
+                         }
+                     ),
+                 }
+             ),
+             # If there's a common (input, target) tuple from the features,
+             # specify them here. They'll be used if as_supervised=True in
+             # builder.as_dataset.
+             supervised_keys=None,
+             # Homepage of the dataset for documentation
+             homepage="https://github.com/iapp-technology/iapp-wiki-qa-dataset/",
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         arch_path = dl_manager.download_and_extract(self._DOWNLOAD_URL)
+         data_dir = os.path.join(arch_path, "data")
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={"filepath": os.path.join(data_dir, self._TRAIN_FILE)},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={"filepath": os.path.join(data_dir, self._VALID_FILE)},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={"filepath": os.path.join(data_dir, self._TEST_FILE)},
+             ),
+         ]
+
+     def _generate_examples(self, filepath):
+         """Yields examples."""
+         with open(filepath, encoding="utf-8") as f:
+             for id_, row in enumerate(f):
+                 data = json.loads(row)
+
+                 yield id_, {
+                     "question_id": data["question_id"],
+                     "article_id": data["article_id"],
+                     "title": data["title"],
+                     "context": data["context"],
+                     "question": data["question"],
+                     "answers": {
+                         "text": data["answers"]["text"],
+                         "answer_start": data["answers"]["answer_start"],
+                         "answer_end": data["answers"]["answer_end"],
+                     },
+                 }
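The `_generate_examples` loop is plain JSON-lines parsing. A minimal, self-contained sketch of the same idea, using an in-memory buffer in place of the downloaded `train.jsonl` and a reduced field set:

```python
import io
import json

def generate_examples(fileobj):
    # One JSON object per line; yield (key, example) pairs as the builder does.
    for id_, row in enumerate(fileobj):
        data = json.loads(row)
        yield id_, {
            "question_id": data["question_id"],
            "question": data["question"],
            "answers": data["answers"],
        }

# A single fabricated record, for illustration only.
jsonl = io.StringIO(
    '{"question_id": "q0", "question": "who?", '
    '"answers": {"text": ["a"], "answer_start": [0], "answer_end": [1]}}\n'
)
examples = list(generate_examples(jsonl))
print(examples[0])
```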