---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- ko
license:
- cc-by-nd-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: korquad
pretty_name: The Korean Question Answering Dataset
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
config_name: squad_kor_v1
splits:
- name: train
num_bytes: 83380337
num_examples: 60407
- name: validation
num_bytes: 8261729
num_examples: 5774
download_size: 42408533
dataset_size: 91642066
---
# Dataset Card for KorQuAD v1.0
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://korquad.github.io/KorQuad%201.0/
- **Repository:** https://github.com/korquad/korquad.github.io/tree/master/dataset
- **Paper:** https://arxiv.org/abs/1909.07005
### Dataset Summary
KorQuAD 1.0 is a large-scale question-and-answer dataset constructed for Korean machine reading comprehension. The authors investigated the dataset to understand the distribution of answers and the types of reasoning required to answer each question. The dataset was built by benchmarking the data generation process of SQuAD v1.0 so that it meets the same standard.
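For orientation, a minimal sketch of loading the dataset with the Hugging Face `datasets` library, assuming the Hub dataset ID `squad_kor_v1` (the `config_name` declared in the metadata above):
```python
# Minimal sketch: load KorQuAD v1.0 with the Hugging Face `datasets` library.
from datasets import load_dataset

# "squad_kor_v1" matches the config name in this card's metadata.
dataset = load_dataset("squad_kor_v1")

print(dataset)                          # DatasetDict with "train" and "validation" splits
print(dataset["train"][0]["question"])  # first training question (Korean text)
```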
### Supported Tasks and Leaderboards
`question-answering` (extractive QA). The official leaderboard for KorQuAD v1.0 is hosted on the [KorQuAD homepage](https://korquad.github.io/KorQuad%201.0/).
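Because KorQuAD v1.0 keeps the SQuAD v1.0 format, predictions are typically scored with exact match (EM) and F1. A hedged sketch using the `squad` metric from the `evaluate` library, reusing the example instance shown in the Data Instances section below:
```python
# Sketch: SQuAD-style EM/F1 scoring with the `evaluate` library.
import evaluate

squad_metric = evaluate.load("squad")

# Prediction/reference pair built from the example instance in this card.
predictions = [{"id": "6566495-0-0", "prediction_text": "교향곡"}]
references = [{"id": "6566495-0-0",
               "answers": {"answer_start": [54], "text": ["교향곡"]}}]

print(squad_metric.compute(predictions=predictions, references=references))
# -> {'exact_match': 100.0, 'f1': 100.0}
```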
### Languages
Korean
## Dataset Structure
Follows the standard SQuAD format.
### Data Instances
An example from the data set looks as follows:
```
{'answers': {'answer_start': [54], 'text': ['교향곡']},
 'context': '1839년 바그너는 괴테의 파우스트을 처음 읽고 그 내용에 마음이 끌려 이를 소재로 해서 하나의 교향곡을 쓰려는 뜻을 갖는다. 이 시기 바그너는 1838년에 빛 독촉으로 산전수전을 다 걲은 상황이라 좌절과 실망에 가득했으며 메피스토펠레스를 만나는 파우스트의 심경에 공감했다고 한다. 또한 파리에서 아브네크의 지휘로 파리 음악원 관현악단이 연주하는 베토벤의 교향곡 9번을 듣고 깊은 감명을 받았는데, 이것이 이듬해 1월에 파우스트의 서곡으로 쓰여진 이 작품에 조금이라도 영향을 끼쳤으리라는 것은 의심할 여지가 없다. 여기의 라단조 조성의 경우에도 그의 전기에 적혀 있는 것처럼 단순한 정신적 피로나 실의가 반영된 것이 아니라 베토벤의 합창교향곡 조성의 영향을 받은 것을 볼 수 있다. 그렇게 교향곡 작곡을 1839년부터 40년에 걸쳐 파리에서 착수했으나 1악장을 쓴 뒤에 중단했다. 또한 작품의 완성과 동시에 그는 이 서곡(1악장)을 파리 음악원의 연주회에서 연주할 파트보까지 준비하였으나, 실제로는 이루어지지는 않았다. 결국 초연은 4년 반이 지난 후에 드레스덴에서 연주되었고 재연도 이루어졌지만, 이후에 그대로 방치되고 말았다. 그 사이에 그는 리엔치와 방황하는 네덜란드인을 완성하고 탄호이저에도 착수하는 등 분주한 시간을 보냈는데, 그런 바쁜 생활이 이 곡을 잊게 한 것이 아닌가 하는 의견도 있다.',
 'id': '6566495-0-0',
 'question': '바그너는 괴테의 파우스트를 읽고 무엇을 쓰고자 했는가?',
 'title': '파우스트_서곡'}
```
### Data Fields
```
{'id': Value(dtype='string', id=None),
'title': Value(dtype='string', id=None),
'context': Value(dtype='string', id=None),
'question': Value(dtype='string', id=None),
'answers': Sequence(feature={'text': Value(dtype='string', id=None), 'answer_start': Value(dtype='int32', id=None)}, length=-1, id=None)}
```
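When loaded with `datasets`, `answers` decodes to a dict of parallel lists, and `answer_start` is a character offset into `context`. A small sketch (assuming `dataset` was loaded as in the summary above):
```python
# Sketch: the `answers` sequence is a dict of parallel lists per example.
example = dataset["train"][0]
for text, start in zip(example["answers"]["text"],
                       example["answers"]["answer_start"]):
    # `answer_start` is a character offset into `context`.
    assert example["context"][start:start + len(text)] == text
```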
### Data Splits
- Train: 60407
- Validation: 5774
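These counts can be verified against the loaded splits:
```python
# Sketch: confirm split sizes against the metadata above.
print(dataset["train"].num_rows)       # 60407
print(dataset["validation"].num_rows)  # 5774
```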
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
Wikipedia
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[CC BY-ND 2.0 KR](https://creativecommons.org/licenses/by-nd/2.0/kr/deed.en)
### Citation Information
```
@article{lim2019korquad1,
  title={KorQuAD1.0: Korean QA Dataset for Machine Reading Comprehension},
  author={Lim, Seungyoung and Kim, Myungji and Lee, Jooyoul},
  journal={arXiv preprint arXiv:1909.07005},
  year={2019}
}
```
### Contributions
Thanks to [@cceyda](https://github.com/cceyda) for adding this dataset.