Commit 7d6584e
Committed by system (HF staff)
0 Parent(s)

Update files from the datasets library (from 1.2.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
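
A note on the rules above: each pattern routes matching files (for example the dummy_data.zip added in this commit) through Git LFS rather than plain git. A rough sketch of the matching, using Python's fnmatch as an approximation (real .gitattributes matching is gitignore-style, so the `saved_model/**/*` rule is not covered by this):

```python
# Approximate check of which files the LFS patterns above would capture.
# fnmatch only approximates gitignore-style globs; good enough for "*.ext".
from fnmatch import fnmatch

lfs_patterns = ["*.7z", "*.arrow", "*.bin", "*.zip", "*tfevents*"]
for name in ["dummy_data.zip", "model.bin", "README.md"]:
    tracked = any(fnmatch(name, p) for p in lfs_patterns)
    print(f"{name}: {'LFS' if tracked else 'plain git'}")
```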
README.md ADDED
@@ -0,0 +1,157 @@
+ ---
+ annotations_creators:
+ - crowdsourced
+ language_creators:
+ - found
+ languages:
+ - ko
+ licenses:
+ - cc-by-nd-2-0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - question-answering
+ task_ids:
+ - extractive-qa
+ ---
+
+ # Dataset Card for KorQuAD v1.0
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** https://korquad.github.io/KorQuad%201.0/
+ - **Repository:** https://github.com/korquad/korquad.github.io/tree/master/dataset
+ - **Paper:** https://arxiv.org/abs/1909.07005
+
+ ### Dataset Summary
+
+ KorQuAD 1.0 is a large-scale question-and-answer dataset constructed for Korean machine reading comprehension. The authors investigated the dataset to understand the distribution of answers and the types of reasoning required to answer each question. The data-generating process follows that of SQuAD v1.0 so that the dataset meets the same standard.
+
+ ### Supported Tasks and Leaderboards
+
+ `question-answering`
+
+ ### Languages
+
+ Korean
+
+ ## Dataset Structure
+
+ Follows the standard SQuAD format.
+
+ ### Data Instances
+
+ An example from the dataset looks as follows:
+ ```
+ {'answers': {'answer_start': [54], 'text': ['교향곡']},
+  'context': '1839년 바그너는 괴테의 파우스트을 처음 읽고 그 내용에 마음이 끌려 이를 소재로 해서 하나의 교향곡을 쓰려는 뜻을 갖는다. 이 시기 바그너는 1838년에 빛 독촉으로 산전수전을 다 걲은 상황이라 좌절과 실망에 가득했으며 메피스토펠레스를 만나는 파우스트의 심경에 공감했다고 한다. 또한 파리에서 아브네크의 지휘로 파리 음악원 관현악단이 연주하는 베토벤의 교향곡 9번을 듣고 깊은 감명을 받았는데, 이것이 이듬해 1월에 파우스트의 서곡으로 쓰여진 이 작품에 조금이라도 영향을 끼쳤으리라는 것은 의심할 여지가 없다. 여기의 라단조 조성의 경우에도 그의 전기에 적혀 있는 것처럼 단순한 정신적 피로나 실의가 반영된 것이 아니라 베토벤의 합창교향곡 조성의 영향을 받은 것을 볼 수 있다. 그렇게 교향곡 작곡을 1839년부터 40년에 걸쳐 파리에서 착수했으나 1악장을 쓴 뒤에 중단했다. 또한 작품의 완성과 동시에 그는 이 서곡(1악장)을 파리 음악원의 연주회에서 연주할 파트보까지 준비하였으나, 실제로는 이루어지지는 않았다. 결국 초연은 4년 반이 지난 후에 드레스덴에서 연주되었고 재연도 이루어졌지만, 이후에 그대로 방치되고 말았다. 그 사이에 그는 리엔치와 방황하는 네덜란드인을 완성하고 탄호이저에도 착수하는 등 분주한 시간을 보냈는데, 그런 바쁜 생활이 이 곡을 잊게 한 것이 아닌가 하는 의견도 있다.',
+  'id': '6566495-0-0',
+  'question': '바그너는 괴테의 파우스트를 읽고 무엇을 쓰고자 했는가?',
+  'title': '파우스트_서곡'}
+ ```
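
Such an instance can be reproduced by loading the dataset with the `datasets` library (a quick sketch; `squad_kor_v1` is the config name defined by this dataset's script):

```python
# Sketch: load KorQuAD v1.0 and inspect the first training example.
from datasets import load_dataset

dataset = load_dataset("squad_kor_v1")
example = dataset["train"][0]
print(example["question"])
print(example["answers"]["text"], example["answers"]["answer_start"])
```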
+
+ ### Data Fields
+ ```
+ {'id': Value(dtype='string', id=None),
+  'title': Value(dtype='string', id=None),
+  'context': Value(dtype='string', id=None),
+  'question': Value(dtype='string', id=None),
+  'answers': Sequence(feature={'text': Value(dtype='string', id=None), 'answer_start': Value(dtype='int32', id=None)}, length=-1, id=None)}
+ ```
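
This schema can also be checked directly on a loaded split (a small sketch, reusing the `dataset` object from the loading example above):

```python
# Inspect the declared feature schema of the train split.
print(dataset["train"].features)
```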
+ ### Data Splits
+
+ - Train: 60,407 question-answer pairs
+ - Validation: 5,774 question-answer pairs
+
+
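The split sizes can be verified against the loaded `DatasetDict` (sketch, same assumptions as above):

```python
# Confirm the documented split sizes.
print(dataset["train"].num_rows)       # expected: 60407
print(dataset["validation"].num_rows)  # expected: 5774
```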
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ Wikipedia
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ [CC BY-ND 2.0 KR](https://creativecommons.org/licenses/by-nd/2.0/kr/deed.en)
+
+ ### Citation Information
+ ```
+ @article{lim2019korquad1,
+   title={KorQuAD1.0: Korean QA Dataset for Machine Reading Comprehension},
+   author={Lim, Seungyoung and Kim, Myungji and Lee, Jooyoul},
+   journal={arXiv preprint arXiv:1909.07005},
+   year={2019}
+ }
+ ```
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"squad_kor_v1": {"description": "KorQuAD 1.0 is a large-scale Korean dataset for machine reading comprehension task consisting of human generated questions for Wikipedia articles. We benchmark the data collecting process of SQuADv1.0 and crowdsourced 70,000+ question-answer pairs. 1,637 articles and 70,079 pairs of question answers were collected. 1,420 articles are used for the training set, 140 for the dev set, and 77 for the test set. 60,407 question-answer pairs are for the training set, 5,774 for the dev set, and 3,898 for the test set.\n", "citation": "@article{lim2019korquad1,\n title={Korquad1. 0: Korean qa dataset for machine reading comprehension},\n author={Lim, Seungyoung and Kim, Myungji and Lee, Jooyoul},\n journal={arXiv preprint arXiv:1909.07005},\n year={2019}\n}\n", "homepage": "https://korquad.github.io/KorQuad%201.0/", "license": "CC BY-ND 2.0 KR", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "title": {"dtype": "string", "id": null, "_type": "Value"}, "context": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "answers": {"feature": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "answer_start": {"dtype": "int32", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "squad_kor_v1", "config_name": "squad_kor_v1", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 83380337, "num_examples": 60407, "dataset_name": "squad_kor_v1"}, "validation": {"name": "validation", "num_bytes": 8261729, "num_examples": 5774, "dataset_name": "squad_kor_v1"}}, "download_checksums": {"https://korquad.github.io/dataset/KorQuAD_v1.0_train.json": {"num_bytes": 38527475, "checksum": "40d5115879a701751781df721d901abfa736d8db5f89000f2619433f39bf2dd2"}, "https://korquad.github.io/dataset/KorQuAD_v1.0_dev.json": {"num_bytes": 3881058, "checksum": "25ffeb51e6c51ec02c071b60a10188e10005c144110f0d876b26079d80a35bdf"}}, "download_size": 42408533, "post_processing_size": null, "dataset_size": 91642066, "size_in_bytes": 134050599}}
dummy/squad_kor_v1/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f1dbcc2932d6783fdf004c1dd40d86c3ec6c488f264a2c4b02e3ba748e03439f
+ size 32275
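
The zip itself is not stored in the repository; this is a Git LFS pointer file (matching the `*.zip` rule in .gitattributes above). A minimal sketch of parsing such a pointer:

```python
# Parse a Git LFS pointer: each line is "key value".
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:f1dbcc2932d6783fdf004c1dd40d86c3ec6c488f264a2c4b02e3ba748e03439f
size 32275"""

fields = dict(line.split(" ", 1) for line in pointer.splitlines())
print(fields["oid"])   # sha256 checksum of the real zip
print(fields["size"])  # byte size of the real zip: 32275
```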
squad_kor_v1.py ADDED
@@ -0,0 +1,117 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ """KorQuAD v1.0: The Korean Question Answering Dataset"""
+
+ from __future__ import absolute_import, division, print_function
+
+ import json
+
+ import datasets
+
+
+ _CITATION = """\
+ @article{lim2019korquad1,
+   title={Korquad1. 0: Korean qa dataset for machine reading comprehension},
+   author={Lim, Seungyoung and Kim, Myungji and Lee, Jooyoul},
+   journal={arXiv preprint arXiv:1909.07005},
+   year={2019}
+ }
+ """
+
+ _DESCRIPTION = """\
+ KorQuAD 1.0 is a large-scale Korean dataset for machine reading comprehension task consisting of human generated questions for Wikipedia articles. We benchmark the data collecting process of SQuADv1.0 and crowdsourced 70,000+ question-answer pairs. 1,637 articles and 70,079 pairs of question answers were collected. 1,420 articles are used for the training set, 140 for the dev set, and 77 for the test set. 60,407 question-answer pairs are for the training set, 5,774 for the dev set, and 3,898 for the test set.
+ """
+ _HOMEPAGE = "https://korquad.github.io/KorQuad%201.0/"
+ _LICENSE = "CC BY-ND 2.0 KR"
+
+ _URL = "https://korquad.github.io/dataset/"
+ _URLS = {
+     "train": _URL + "KorQuAD_v1.0_train.json",
+     "dev": _URL + "KorQuAD_v1.0_dev.json",
+ }
+
+
+ class SquadKorV1(datasets.GeneratorBasedBuilder):
+     """KorQuAD 1.0 dataset"""
+
+     VERSION = datasets.Version("1.0.0")
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(
+             name="squad_kor_v1",
+             version=VERSION,
+             description=_DESCRIPTION,
+         ),
+     ]
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "id": datasets.Value("string"),
+                     "title": datasets.Value("string"),
+                     "context": datasets.Value("string"),
+                     "question": datasets.Value("string"),
+                     "answers": datasets.features.Sequence(
+                         {
+                             "text": datasets.Value("string"),
+                             "answer_start": datasets.Value("int32"),
+                         }
+                     ),
+                 }
+             ),
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         # Download and extract the train and dev JSON files.
+         urls_to_download = _URLS
+         downloaded_files = dl_manager.download_and_extract(urls_to_download)
+
+         return [
+             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]}),
+             datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": downloaded_files["dev"]}),
+         ]
+
+     def _generate_examples(self, filepath):
+         """Yields examples in the standard SQuAD article/paragraph/qas layout."""
+         with open(filepath, encoding="utf-8") as f:
+             squad = json.load(f)
+             for example in squad["data"]:
+                 title = example.get("title", "").strip()
+                 for paragraph in example["paragraphs"]:
+                     context = paragraph["context"].strip()
+                     for qa in paragraph["qas"]:
+                         question = qa["question"].strip()
+                         id_ = qa["id"]
+
+                         answer_starts = [answer["answer_start"] for answer in qa["answers"]]
+                         answers = [answer["text"].strip() for answer in qa["answers"]]
+
+                         yield id_, {
+                             "title": title,
+                             "context": context,
+                             "question": question,
+                             "id": id_,
+                             "answers": {
+                                 "answer_start": answer_starts,
+                                 "text": answers,
+                             },
+                         }
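
For illustration, a hypothetical minimal input showing the nested SQuAD layout (`data` -> `paragraphs` -> `qas` -> `answers`) that `_generate_examples` above walks, built from the example instance in the dataset card:

```python
# Hypothetical tiny KorQuAD-style file contents; field names follow the
# qa["question"], paragraph["context"], answer["answer_start"] accesses above.
sample = {
    "data": [
        {
            "title": "파우스트_서곡",
            "paragraphs": [
                {
                    "context": "1839년 바그너는 괴테의 파우스트을 처음 읽고 ...",
                    "qas": [
                        {
                            "id": "6566495-0-0",
                            "question": "바그너는 괴테의 파우스트를 읽고 무엇을 쓰고자 했는가?",
                            "answers": [{"text": "교향곡", "answer_start": 54}],
                        }
                    ],
                }
            ],
        }
    ]
}

qa = sample["data"][0]["paragraphs"][0]["qas"][0]
print(qa["id"], "->", qa["answers"][0]["text"])
```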