---
license: cc-by-4.0
pretty_name: KorQuAD for question generation
language: ko
multilinguality: monolingual
size_categories: 10K<n<100K
source_datasets: squad_kor_v1
task_categories: question-generation
task_ids: question-generation
---

# Dataset Card for "lmqg/qg_korquad"

## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)

### Dataset Summary
This is a subset of [QG-Bench](https://github.com/asahi417/lm-question-generation/blob/master/QG_BENCH.md#datasets), a unified question generation benchmark proposed in
["Generative Language Models for Paragraph-Level Question Generation: A Unified Benchmark and Evaluation" (EMNLP 2022 main conference)](https://arxiv.org/abs/2210.03992).
It is a modified version of [KorQuAD](https://huggingface.co/datasets/squad_kor_v1) for the question generation (QG) task.
Since the original dataset only provides training and validation sets, we manually sampled a test set from the training set;
the test paragraphs have no overlap with the paragraphs kept in the training set.
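
The dataset can be loaded directly with the `datasets` library. A minimal sketch (split names as listed in the Data Splits section below):

```python
from datasets import load_dataset

# Load the question-generation version of KorQuAD from the Hugging Face Hub.
dataset = load_dataset("lmqg/qg_korquad")

# Three splits are available: train, validation, and test (see Data Splits below).
print(dataset)
print(dataset["train"][0]["question"])
```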

### Supported Tasks and Leaderboards
* `question-generation`: The dataset is intended to be used to train a model for question generation.
  Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more details); a metric-computation sketch follows this list.
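
As an illustration of how the surface-overlap part of this evaluation can be approximated, the sketch below computes corpus-level BLEU with `sacrebleu`. Note that the official QG-Bench evaluation uses its own tooling and tokenization, and the default `sacrebleu` tokenizer is not tuned for Korean, so this is only indicative; `predictions` stands in for the output of a hypothetical QG model.

```python
import sacrebleu
from datasets import load_dataset

# Reference questions from the test split.
references = load_dataset("lmqg/qg_korquad", split="test")["question"]

# `predictions` would normally come from a trained QG model; here we simply
# echo the references to demonstrate the call signature (BLEU is trivially 100).
predictions = list(references)

# corpus_bleu takes a list of hypotheses and a list of reference streams.
bleu = sacrebleu.corpus_bleu(predictions, [list(references)])
print(f"BLEU: {bleu.score:.2f}")
```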
  
### Languages
Korean (ko)

## Dataset Structure
An example from the 'train' split looks as follows.
```
{
  "question": "ν•¨μˆ˜ν•΄μ„ν•™μ΄ μ£Όλͺ©ν•˜λŠ” νƒκ΅¬λŠ”?",
  "paragraph": "변화에 λŒ€ν•œ 이해와 λ¬˜μ‚¬λŠ” μžμ—°κ³Όν•™μ— μžˆμ–΄μ„œ 일반적인 주제이며, 미적뢄학은 λ³€ν™”λ₯Ό νƒκ΅¬ν•˜λŠ” κ°•λ ₯ν•œ λ„κ΅¬λ‘œμ„œ λ°œμ „λ˜μ—ˆλ‹€. ν•¨μˆ˜λŠ” λ³€ν™”ν•˜λŠ” 양을 λ¬˜μ‚¬ν•¨μ— μžˆμ–΄μ„œ 쀑좔적인 κ°œλ…μœΌλ‘œμ¨ λ– μ˜€λ₯΄κ²Œ λœλ‹€. μ‹€μˆ˜μ™€ μ‹€λ³€μˆ˜λ‘œ κ΅¬μ„±λœ ν•¨μˆ˜μ˜ μ—„λ°€ν•œ 탐ꡬ가 μ‹€ν•΄μ„ν•™μ΄λΌλŠ” λΆ„μ•Όλ‘œ μ•Œλ €μ§€κ²Œ λ˜μ—ˆκ³ , λ³΅μ†Œμˆ˜μ— λŒ€ν•œ 이와 같은 νƒκ΅¬λΆ„μ•ΌλŠ” λ³΅μ†Œν•΄μ„ν•™μ΄λΌκ³  ν•œλ‹€. ν•¨μˆ˜ν•΄μ„ν•™μ€ ν•¨μˆ˜μ˜ 곡간(특히 λ¬΄ν•œμ°¨μ›)의 탐ꡬ에 μ£Όλͺ©ν•œλ‹€. ν•¨μˆ˜ν•΄μ„ν•™μ˜ λ§Žμ€ μ‘μš©λΆ„μ•Ό 쀑 ν•˜λ‚˜κ°€ μ–‘μžμ—­ν•™μ΄λ‹€. λ§Žμ€ λ¬Έμ œλ“€μ΄ μžμ—°μŠ€λŸ½κ²Œ μ–‘κ³Ό κ·Έ μ–‘μ˜ λ³€ν™”μœ¨μ˜ κ΄€κ³„λ‘œ κ·€μ°©λ˜κ³ , μ΄λŸ¬ν•œ λ¬Έμ œλ“€μ΄ λ―ΈλΆ„λ°©μ •μ‹μœΌλ‘œ 닀루어진닀. μžμ—°μ˜ λ§Žμ€ ν˜„μƒλ“€μ΄ λ™μ—­ν•™κ³„λ‘œ 기술될 수 μžˆλ‹€. 혼돈 이둠은 μ΄λŸ¬ν•œ 예츑 λΆˆκ°€λŠ₯ν•œ ν˜„μƒμ„ νƒκ΅¬ν•˜λŠ” 데 μƒλ‹Ήν•œ κΈ°μ—¬λ₯Ό ν•œλ‹€.",
  "answer": "ν•¨μˆ˜μ˜ 곡간(특히 λ¬΄ν•œμ°¨μ›)의 탐ꡬ",
  "sentence": "ν•¨μˆ˜ν•΄μ„ν•™μ€ ν•¨μˆ˜μ˜ 곡간(특히 λ¬΄ν•œμ°¨μ›)의 탐ꡬ 에 μ£Όλͺ©ν•œλ‹€.",
  "paragraph_sentence": '변화에 λŒ€ν•œ 이해와 λ¬˜μ‚¬λŠ” μžμ—°κ³Όν•™μ— μžˆμ–΄μ„œ 일반적인 주제이며, 미적뢄학은 λ³€ν™”λ₯Ό νƒκ΅¬ν•˜λŠ” κ°•λ ₯ν•œ λ„κ΅¬λ‘œμ„œ λ°œμ „λ˜μ—ˆλ‹€. ν•¨μˆ˜λŠ” λ³€ν™”ν•˜λŠ” 양을 λ¬˜μ‚¬ν•¨μ— μžˆμ–΄μ„œ 쀑좔적인 κ°œλ…μœΌλ‘œμ¨ λ– μ˜€λ₯΄κ²Œ λœλ‹€. μ‹€μˆ˜μ™€ μ‹€λ³€μˆ˜λ‘œ κ΅¬μ„±λœ ν•¨μˆ˜μ˜ μ—„λ°€ν•œ 탐ꡬ가 μ‹€ν•΄μ„ν•™μ΄λΌλŠ” λΆ„μ•Όλ‘œ μ•Œλ €μ§€κ²Œ λ˜μ—ˆκ³ , λ³΅μ†Œμˆ˜μ— λŒ€ν•œ 이와 같은 탐ꡬ λΆ„μ•ΌλŠ” λ³΅μ†Œν•΄μ„ν•™μ΄λΌκ³  ν•œλ‹€. <hl> ν•¨μˆ˜ν•΄μ„ν•™μ€ ν•¨μˆ˜μ˜ 곡간(특히 λ¬΄ν•œμ°¨μ›)의 탐ꡬ 에 μ£Όλͺ©ν•œλ‹€. <hl> ν•¨μˆ˜ν•΄μ„ν•™μ˜ λ§Žμ€ μ‘μš©λΆ„μ•Ό 쀑 ν•˜λ‚˜κ°€ μ–‘μžμ—­ν•™μ΄λ‹€. λ§Žμ€ λ¬Έμ œλ“€μ΄ μžμ—°μŠ€λŸ½κ²Œ μ–‘κ³Ό κ·Έ μ–‘μ˜ λ³€ν™”μœ¨μ˜ κ΄€κ³„λ‘œ κ·€μ°©λ˜κ³ , μ΄λŸ¬ν•œ λ¬Έμ œλ“€μ΄ λ―ΈλΆ„λ°©μ •μ‹μœΌλ‘œ 닀루어진닀. μžμ—°μ˜ λ§Žμ€ ν˜„μƒλ“€μ΄ λ™μ—­ν•™κ³„λ‘œ 기술될 수 μžˆλ‹€. 혼돈 이둠은 μ΄λŸ¬ν•œ 예츑 λΆˆκ°€λŠ₯ν•œ ν˜„μƒμ„ νƒκ΅¬ν•˜λŠ” 데 μƒλ‹Ήν•œ κΈ°μ—¬λ₯Ό ν•œλ‹€.',
  "paragraph_answer": '변화에 λŒ€ν•œ 이해와 λ¬˜μ‚¬λŠ” μžμ—°κ³Όν•™μ— μžˆμ–΄μ„œ 일반적인 주제이며, 미적뢄학은 λ³€ν™”λ₯Ό νƒκ΅¬ν•˜λŠ” κ°•λ ₯ν•œ λ„κ΅¬λ‘œμ„œ λ°œμ „λ˜μ—ˆλ‹€. ν•¨μˆ˜λŠ” λ³€ν™”ν•˜λŠ” 양을 λ¬˜μ‚¬ν•¨μ— μžˆμ–΄μ„œ 쀑좔적인 κ°œλ…μœΌλ‘œμ¨ λ– μ˜€λ₯΄κ²Œ λœλ‹€. μ‹€μˆ˜μ™€ μ‹€λ³€μˆ˜λ‘œ κ΅¬μ„±λœ ν•¨μˆ˜μ˜ μ—„λ°€ν•œ 탐ꡬ가 μ‹€ν•΄μ„ν•™μ΄λΌλŠ” λΆ„μ•Όλ‘œ μ•Œλ €μ§€κ²Œ λ˜μ—ˆκ³ , λ³΅μ†Œμˆ˜μ— λŒ€ν•œ 이와 같은 탐ꡬ λΆ„μ•ΌλŠ” λ³΅μ†Œν•΄μ„ν•™μ΄λΌκ³  ν•œλ‹€. ν•¨μˆ˜ν•΄μ„ν•™μ€ <hl> ν•¨μˆ˜μ˜ 곡간(특히 λ¬΄ν•œμ°¨μ›)의 탐ꡬ <hl>에 μ£Όλͺ©ν•œλ‹€. ν•¨μˆ˜ν•΄μ„ν•™μ˜ λ§Žμ€ μ‘μš©λΆ„μ•Ό 쀑 ν•˜λ‚˜κ°€ μ–‘μžμ—­ν•™μ΄λ‹€. λ§Žμ€ λ¬Έμ œλ“€μ΄ μžμ—°μŠ€λŸ½κ²Œ μ–‘κ³Ό κ·Έ μ–‘μ˜ λ³€ν™”μœ¨μ˜ κ΄€κ³„λ‘œ κ·€μ°©λ˜κ³ , μ΄λŸ¬ν•œ λ¬Έμ œλ“€μ΄ λ―ΈλΆ„λ°©μ •μ‹μœΌλ‘œ 닀루어진닀. μžμ—°μ˜ λ§Žμ€ ν˜„μƒλ“€μ΄ λ™μ—­ν•™κ³„λ‘œ 기술될 수 μžˆλ‹€. 혼돈 이둠은 μ΄λŸ¬ν•œ 예츑 λΆˆκ°€λŠ₯ν•œ ν˜„μƒμ„ νƒκ΅¬ν•˜λŠ” 데 μƒλ‹Ήν•œ κΈ°μ—¬λ₯Ό ν•œλ‹€.',
  "sentence_answer": "ν•¨μˆ˜ν•΄μ„ν•™μ€ <hl> ν•¨μˆ˜μ˜ 곡간(특히 λ¬΄ν•œμ°¨μ›)의 탐ꡬ <hl> 에 μ£Όλͺ©ν•œλ‹€."
}
```

The data fields are the same among all splits.
- `question`: a `string` feature. 
- `paragraph`: a `string` feature.
- `answer`: a `string` feature.
- `sentence`: a `string` feature.
- `paragraph_answer`: a `string` feature, which is the same as the paragraph but with the answer highlighted by a special token `<hl>`.
- `paragraph_sentence`: a `string` feature, which is the same as the paragraph but with the sentence containing the answer highlighted by a special token `<hl>`.
- `sentence_answer`: a `string` feature, which is the same as the sentence but with the answer highlighted by a special token `<hl>`.

Each of the `paragraph_answer`, `paragraph_sentence`, and `sentence_answer` features can be used to train a question generation model,
but each provides different information. The `paragraph_answer` and `sentence_answer` features are for answer-aware question generation, while the
`paragraph_sentence` feature is for sentence-aware question generation.
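
For example, answer-aware QG can be cast as a sequence-to-sequence task that maps `paragraph_answer` to `question`. The sketch below shows one possible preprocessing step; the `google/mt5-small` checkpoint and the maximum lengths are illustrative assumptions, not choices prescribed by this dataset or the paper.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Any multilingual seq2seq tokenizer can be used; mT5 is an illustrative choice.
tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
# <hl> is not in the base vocabulary, so register it as a special token
# (the corresponding model's embeddings would need resizing before training).
tokenizer.add_special_tokens({"additional_special_tokens": ["<hl>"]})

dataset = load_dataset("lmqg/qg_korquad", split="train")

def preprocess(example):
    # Input: the paragraph with the answer span marked by <hl> tokens.
    model_inputs = tokenizer(example["paragraph_answer"], truncation=True, max_length=512)
    # Target: the reference question.
    labels = tokenizer(text_target=example["question"], truncation=True, max_length=64)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(preprocess, remove_columns=dataset.column_names)
```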

## Data Splits

|train|validation|test |
|----:|---------:|----:|
|54556|     5766 |5766 |
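
Since the test set was carved out of the original KorQuAD training data, the split sizes and the no-paragraph-overlap claim can be sanity-checked with a short script (a sketch, not part of the official tooling):

```python
from datasets import load_dataset

dataset = load_dataset("lmqg/qg_korquad")

# Split sizes, which should match the table above.
for split in ("train", "validation", "test"):
    print(split, len(dataset[split]))

# Test paragraphs were sampled out of the original training set and should
# not reappear among the remaining training paragraphs.
train_paragraphs = set(dataset["train"]["paragraph"])
test_paragraphs = set(dataset["test"]["paragraph"])
print("overlapping paragraphs:", len(train_paragraphs & test_paragraphs))
```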


## Citation Information

```
@inproceedings{ushio-etal-2022-generative,
    title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration: {A} {U}nified {B}enchmark and {E}valuation",
    author = "Ushio, Asahi  and
        Alva-Manchego, Fernando  and
        Camacho-Collados, Jose",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, U.A.E.",
    publisher = "Association for Computational Linguistics",
}
```