---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- ko
license:
- cc-by-nd-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: korquad
pretty_name: The Korean Question Answering Dataset
dataset_info:
  features:
  - name: id
    dtype: string
  - name: title
    dtype: string
  - name: context
    dtype: string
  - name: question
    dtype: string
  - name: answers
    sequence:
    - name: text
      dtype: string
    - name: answer_start
      dtype: int32
  config_name: squad_kor_v1
  splits:
  - name: train
    num_bytes: 83380337
    num_examples: 60407
  - name: validation
    num_bytes: 8261729
    num_examples: 5774
  download_size: 42408533
  dataset_size: 91642066
---

# Dataset Card for KorQuAD v1.0

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://korquad.github.io/KorQuad%201.0/
- **Repository:** https://github.com/korquad/korquad.github.io/tree/master/dataset
- **Paper:** https://arxiv.org/abs/1909.07005

### Dataset Summary

KorQuAD 1.0 is a large-scale question-and-answer dataset constructed for Korean machine reading comprehension. The authors analyze the dataset to understand the distribution of answers and the types of reasoning required to answer each question. The data was collected following the generation process of SQuAD v1.0 so that the dataset meets the same standard.

### Supported Tasks and Leaderboards

`question-answering`

### Languages

Korean

## Dataset Structure

The dataset follows the standard SQuAD format.

### Data Instances

An example from the dataset looks as follows:
```
{'answers': {'answer_start': [54], 'text': ['ꡐν–₯곑']},
 'context': '1839λ…„ λ°”κ·Έλ„ˆλŠ” κ΄΄ν…Œμ˜ νŒŒμš°μŠ€νŠΈμ„ 처음 읽고 κ·Έ λ‚΄μš©μ— 마음이 끌렀 이λ₯Ό μ†Œμž¬λ‘œ ν•΄μ„œ ν•˜λ‚˜μ˜ ꡐν–₯곑을 μ“°λ €λŠ” λœ»μ„ κ°–λŠ”λ‹€. 이 μ‹œκΈ° λ°”κ·Έλ„ˆλŠ” 1838년에 λΉ› λ…μ΄‰μœΌλ‘œ μ‚°μ „μˆ˜μ „μ„ λ‹€ 걲은 상황이라 쒌절과 싀망에 κ°€λ“ν–ˆμœΌλ©° λ©”ν”ΌμŠ€ν† νŽ λ ˆμŠ€λ₯Ό λ§Œλ‚˜λŠ” 파우슀트의 심경에 κ³΅κ°ν–ˆλ‹€κ³  ν•œλ‹€. λ˜ν•œ νŒŒλ¦¬μ—μ„œ μ•„λΈŒλ„€ν¬μ˜ μ§€νœ˜λ‘œ 파리 μŒμ•…μ› κ΄€ν˜„μ•…λ‹¨μ΄ μ—°μ£Όν•˜λŠ” λ² ν† λ²€μ˜ ꡐν–₯곑 9λ²ˆμ„ λ“£κ³  κΉŠμ€ 감λͺ…을 λ°›μ•˜λŠ”λ°, 이것이 이듬해 1월에 파우슀트의 μ„œκ³‘μœΌλ‘œ 쓰여진 이 μž‘ν’ˆμ— μ‘°κΈˆμ΄λΌλ„ 영ν–₯을 λΌμ³€μœΌλ¦¬λΌλŠ” 것은 μ˜μ‹¬ν•  여지가 μ—†λ‹€. μ—¬κΈ°μ˜ 라단쑰 μ‘°μ„±μ˜ κ²½μš°μ—λ„ 그의 전기에 μ ν˜€ μžˆλŠ” κ²ƒμ²˜λŸΌ λ‹¨μˆœν•œ 정신적 ν”Όλ‘œλ‚˜ μ‹€μ˜κ°€ 반영된 것이 μ•„λ‹ˆλΌ λ² ν† λ²€μ˜ 합창ꡐν–₯곑 μ‘°μ„±μ˜ 영ν–₯을 받은 것을 λ³Ό 수 μžˆλ‹€. κ·Έλ ‡κ²Œ ꡐν–₯곑 μž‘κ³‘μ„ 1839λ…„λΆ€ν„° 40년에 걸쳐 νŒŒλ¦¬μ—μ„œ μ°©μˆ˜ν–ˆμœΌλ‚˜ 1μ•…μž₯을 μ“΄ 뒀에 μ€‘λ‹¨ν–ˆλ‹€. λ˜ν•œ μž‘ν’ˆμ˜ μ™„μ„±κ³Ό λ™μ‹œμ— κ·ΈλŠ” 이 μ„œκ³‘(1μ•…μž₯)을 파리 μŒμ•…μ›μ˜ μ—°μ£ΌνšŒμ—μ„œ μ—°μ£Όν•  νŒŒνŠΈλ³΄κΉŒμ§€ μ€€λΉ„ν•˜μ˜€μœΌλ‚˜, μ‹€μ œλ‘œλŠ” μ΄λ£¨μ–΄μ§€μ§€λŠ” μ•Šμ•˜λ‹€. κ²°κ΅­ μ΄ˆμ—°μ€ 4λ…„ 반이 μ§€λ‚œ 후에 λ“œλ ˆμŠ€λ΄μ—μ„œ μ—°μ£Όλ˜μ—ˆκ³  μž¬μ—°λ„ μ΄λ£¨μ–΄μ‘Œμ§€λ§Œ, 이후에 κ·ΈλŒ€λ‘œ 방치되고 λ§μ•˜λ‹€. κ·Έ 사이에 κ·ΈλŠ” λ¦¬μ—”μΉ˜μ™€ λ°©ν™©ν•˜λŠ” λ„€λœλž€λ“œμΈμ„ μ™„μ„±ν•˜κ³  νƒ„ν˜Έμ΄μ €μ—λ„ μ°©μˆ˜ν•˜λŠ” λ“± λΆ„μ£Όν•œ μ‹œκ°„μ„ λ³΄λƒˆλŠ”λ°, 그런 λ°”μœ μƒν™œμ΄ 이 곑을 잊게 ν•œ 것이 μ•„λ‹Œκ°€ ν•˜λŠ” μ˜κ²¬λ„ μžˆλ‹€.',
 'id': '6566495-0-0',
 'question': 'λ°”κ·Έλ„ˆλŠ” κ΄΄ν…Œμ˜ 파우슀트λ₯Ό 읽고 무엇을 μ“°κ³ μž ν–ˆλŠ”κ°€?',
 'title': '파우슀트_μ„œκ³‘'}
```
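A minimal loading sketch using the 🤗 `datasets` library (assuming the Hub id `squad_kor_v1`, which matches the config name in the YAML header above):

```python
from datasets import load_dataset

# Hub id assumed to match the config name "squad_kor_v1" declared above.
dataset = load_dataset("squad_kor_v1")

# The first training example has the structure shown in the instance above.
example = dataset["train"][0]
print(example["question"])
print(example["answers"]["text"], example["answers"]["answer_start"])
```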

### Data Fields
```
{'id': Value(dtype='string', id=None),
 'title': Value(dtype='string', id=None),
 'context': Value(dtype='string', id=None),
 'question': Value(dtype='string', id=None),
 'answers': Sequence(feature={'text': Value(dtype='string', id=None), 'answer_start': Value(dtype='int32', id=None)}, length=-1, id=None)}
```
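Since the task is extractive QA, `answer_start` is a character offset into `context`; a small sanity-check sketch (same assumed Hub id as above):

```python
from datasets import load_dataset

dataset = load_dataset("squad_kor_v1", split="validation")
example = dataset[0]

context = example["context"]
answers = example["answers"]

# Each answer text should be recoverable by slicing the context at its answer_start offset.
for text, start in zip(answers["text"], answers["answer_start"]):
    assert context[start:start + len(text)] == text
```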
### Data Splits

- Train: 60407
- Validation: 5774
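The split sizes can be verified after loading (same assumed Hub id):

```python
from datasets import load_dataset

dataset = load_dataset("squad_kor_v1")
# Expected sizes from the list above: train 60407, validation 5774.
print({split: ds.num_rows for split, ds in dataset.items()})
```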


## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

Wikipedia

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[CC BY-ND 2.0 KR](https://creativecommons.org/licenses/by-nd/2.0/kr/deed.en)

### Citation Information
```
@article{lim2019korquad1,
  title={KorQuAD1.0: Korean QA Dataset for Machine Reading Comprehension},
  author={Lim, Seungyoung and Kim, Myungji and Lee, Jooyoul},
  journal={arXiv preprint arXiv:1909.07005},
  year={2019}
}
```

### Contributions

Thanks to [@cceyda](https://github.com/cceyda) for adding this dataset.