Dataset: lmqg · Modalities: Text · Languages: Japanese · Libraries: Datasets
asahi417 committed commit 8d351bd (1 parent: b575ad7)

Update README.md

Files changed (1):
1. README.md +2 -2
README.md CHANGED
@@ -14,12 +14,12 @@ task_ids: question-generation
 
  ## Dataset Description
  - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- - **Paper:** [TBA](TBA)
+ - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
  - **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
 
  ### Dataset Summary
  This is a subset of [QG-Bench](https://github.com/asahi417/lm-question-generation/blob/master/QG_BENCH.md#datasets), a unified question generation benchmark proposed in
- ["Generative Language Models for Paragraph-Level Question Generation: A Unified Benchmark and Evaluation, EMNLP 2022 main conference"](paper_link).
+ ["Generative Language Models for Paragraph-Level Question Generation: A Unified Benchmark and Evaluation, EMNLP 2022 main conference"](https://arxiv.org/abs/2210.03992).
  This is the [JaQuAD](https://github.com/SkelterLabsInc/JaQuAD) dataset compiled for the question generation (QG) task. The test set of the original
  data is not publicly released, so we randomly sampled test questions from the training set. There is no overlap in paragraphs across the train, test, and validation splits.
 
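
Since the card is tagged with the Datasets library, a minimal loading sketch follows. The dataset id `lmqg/qg_jaquad` is an assumption inferred from the lmqg organization and the JaQuAD source described above; it is not stated in the diff itself.

```python
from datasets import load_dataset

# Load the Japanese QG subset of QG-Bench.
# NOTE: "lmqg/qg_jaquad" is an assumed dataset id (not given in the diff);
# adjust it if the repository uses a different name.
dataset = load_dataset("lmqg/qg_jaquad")

# The card describes train, validation, and test splits; inspect one example.
print(dataset)
print(dataset["train"][0])
```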