---
license: cc-by-4.0
pretty_name: Chinese SQuAD for question generation
language: zh
multilinguality: monolingual
size_categories: 10K<n<100K
task_categories:
- text-generation
task_ids:
- language-modeling
tags:
- question-generation
---

# Dataset Card for "lmqg/qg_zhquad"

## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)

### Dataset Summary
This is a subset of [QG-Bench](https://github.com/asahi417/lm-question-generation/blob/master/QG_BENCH.md#datasets), a unified question generation benchmark proposed in
["Generative Language Models for Paragraph-Level Question Generation: A Unified Benchmark and Evaluation, EMNLP 2022 main conference"](https://arxiv.org/abs/2210.03992).
It is a modified version of [Chinese SQuAD](https://github.com/junzeng-pluto/ChineseSquad) adapted for the question generation (QG) task.
Since the original dataset provides only training and validation sets, we manually sampled a test set from the training set; the sampled test set shares no paragraphs with the remaining training set.

Please see the original repository ([https://github.com/junzeng-pluto/ChineseSquad](https://github.com/junzeng-pluto/ChineseSquad)) for more details.

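The dataset can be loaded with the Hugging Face `datasets` library; a minimal sketch, assuming the dataset resolves under its default configuration:

```python
from datasets import load_dataset

# Download the dataset from the Hugging Face Hub (default configuration assumed).
dataset = load_dataset("lmqg/qg_zhquad")

# Show the number of examples per split and peek at one training example.
print({split: len(dataset[split]) for split in dataset})
print(dataset["train"][0]["question"])
```
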
### Supported Tasks and Leaderboards
* `question-generation`: the dataset is intended for training question generation models.
  Success on this task is typically measured by achieving high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for details); a minimal scoring sketch follows below.

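For a rough sense of how such metrics are computed, here is a hedged sketch of corpus-level BLEU scoring with `sacrebleu` and its Chinese tokenizer; this is an illustrative stand-in rather than the paper's exact evaluation pipeline, and the example sentences are invented:

```python
import sacrebleu

# Hypothetical generated questions and gold references (one reference stream).
hypotheses = ["造纸术是谁发明的?"]
references = [["谁发明了造纸术?"]]

# Corpus-level BLEU with sacrebleu's built-in Chinese tokenizer ("zh").
score = sacrebleu.corpus_bleu(hypotheses, references, tokenize="zh")
print(f"BLEU: {score.score:.2f}")
```
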
### Languages
Chinese (zh)

## Dataset Structure
The data fields are the same among all splits.
- `question`: a `string` feature.
- `paragraph`: a `string` feature.
- `answer`: a `string` feature.
- `sentence`: a `string` feature.
- `paragraph_answer`: a `string` feature, which is the same as the paragraph but with the answer highlighted by a special token `<hl>`.
- `paragraph_sentence`: a `string` feature, which is the same as the paragraph but with the sentence containing the answer highlighted by a special token `<hl>`.
- `sentence_answer`: a `string` feature, which is the same as the sentence but with the answer highlighted by a special token `<hl>`.

Each of the `paragraph_answer`, `paragraph_sentence`, and `sentence_answer` features can be used to train a question generation model,
but they carry different information: `paragraph_answer` and `sentence_answer` are for answer-aware question generation, while
`paragraph_sentence` is for sentence-aware question generation. A sketch of how such highlighted inputs are constructed follows below.

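As an illustration of how the highlighted fields are derived from the plain ones, here is a minimal sketch that builds a `paragraph_answer`-style input; the helper name, the character offset, and the spacing around `<hl>` are assumptions for illustration, not the repository's actual preprocessing:

```python
HIGHLIGHT_TOKEN = "<hl>"

def highlight_answer(paragraph: str, answer_start: int, answer: str) -> str:
    """Wrap the answer span of `paragraph` with <hl> tokens,
    mimicking the `paragraph_answer` field (hypothetical helper)."""
    end = answer_start + len(answer)
    assert paragraph[answer_start:end] == answer, "offset/answer mismatch"
    return (
        paragraph[:answer_start]
        + f"{HIGHLIGHT_TOKEN} {answer} {HIGHLIGHT_TOKEN}"
        + paragraph[end:]
    )

# Invented example: highlight the answer "造纸术" in a short paragraph.
paragraph = "造纸术是中国古代四大发明之一。"
print(highlight_answer(paragraph, paragraph.index("造纸术"), "造纸术"))
# -> <hl> 造纸术 <hl>是中国古代四大发明之一。
```
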
## Data Splits

| train | validation | test |
|------:|-----------:|-----:|
| 59977 |       8236 | 8236 |

## Citation Information

```
@inproceedings{ushio-etal-2022-generative,
    title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
    author = "Ushio, Asahi  and
      Alva-Manchego, Fernando  and
      Camacho-Collados, Jose",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, U.A.E.",
    publisher = "Association for Computational Linguistics",
}
```