---
license: cc-by-4.0
pretty_name: SberQuAD for question generation
language: ru
multilinguality: monolingual
size_categories: 10K<n<100K
source_datasets: sberquad
task_categories:
- text-generation
task_ids:
- language-modeling
tags:
- question-generation
---
# Dataset Card for "lmqg/qg_ruquad"
## Dataset Description
- Repository: https://github.com/asahi417/lm-question-generation
- Paper: https://arxiv.org/abs/2210.03992
- Point of Contact: Asahi Ushio
### Dataset Summary
This is a subset of QG-Bench, a unified question generation benchmark proposed in "Generative Language Models for Paragraph-Level Question Generation: A Unified Benchmark and Evaluation" (EMNLP 2022 main conference). It is a modified version of SberQuAD for the question generation (QG) task. Since the original dataset only contains training and validation sets, we manually sampled a test set from the training set such that its paragraphs have no overlap with the remaining training data.
### Supported Tasks and Leaderboards
- `question-generation`: The dataset is intended to be used to train a model for question generation. Success on this task is typically measured by achieving high BLEU-4, METEOR, ROUGE-L, BERTScore, and MoverScore (see our paper for more details).
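As an illustration only (this is not the official QG-Bench evaluation pipeline), the sketch below scores model outputs against the reference questions with corpus-level BLEU-4 using the `sacrebleu` package; the `predictions` list is a placeholder you would replace with real generations.

```python
# Minimal sketch: corpus-level BLEU-4 for generated questions.
# Assumes `pip install sacrebleu datasets`; `predictions` is a hypothetical
# stand-in for model-generated questions aligned with the validation split.
import sacrebleu
from datasets import load_dataset

validation = load_dataset("lmqg/qg_ruquad", split="validation")
references = validation["question"]

predictions = ["..."] * len(references)  # replace with real model outputs

# sacrebleu expects a list of reference streams (one stream per reference set).
bleu = sacrebleu.corpus_bleu(predictions, [references])
print(f"BLEU-4: {bleu.score:.2f}")
```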
### Languages
Russian (ru)
## Dataset Structure
An example of 'train' looks as follows.
```
{
  'answer': 'известковыми выделениями сине-зелёных водорослей',
  'question': 'чем представлены органические остатки?',
  'sentence': 'Они представлены известковыми выделениями сине-зелёных водорослей , ходами червей, остатками кишечнополостных.',
  'paragraph': "В протерозойских отложениях органические остатки встречаются намного чаще, чем в архейских. Они представлены...",
  'sentence_answer': "Они представлены <hl> известковыми выделениями сине-зелёных водорослей <hl> , ход...",
  'paragraph_answer': "В протерозойских отложениях органические остатки встречаются намного чаще, чем в архейских. Они представлены <hl> известковыми выделениям...",
  'paragraph_sentence': "В протерозойских отложениях органические остатки встречаются намного чаще, чем в архейских. <hl> Они представлены известковыми выделениями сине-зелёных водорослей , ходами червей, остатками кишечнополостных. <hl> Кроме..."
}
```
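For example, a minimal sketch of loading the dataset with the Hugging Face `datasets` library and inspecting one training record:

```python
# Minimal sketch: load the dataset and print the fields of one training example.
# Depending on your `datasets` version, trust_remote_code=True may be needed.
from datasets import load_dataset

dataset = load_dataset("lmqg/qg_ruquad")

example = dataset["train"][0]
for key in ("question", "answer", "sentence", "paragraph",
            "sentence_answer", "paragraph_answer", "paragraph_sentence"):
    print(f"{key}: {example[key][:100]}")
```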
The data fields are the same among all splits.
- `question`: a `string` feature.
- `paragraph`: a `string` feature.
- `answer`: a `string` feature.
- `sentence`: a `string` feature.
- `paragraph_answer`: a `string` feature, which is the same as `paragraph` but with the answer highlighted by the special token `<hl>`.
- `paragraph_sentence`: a `string` feature, which is the same as `paragraph` but with the sentence containing the answer highlighted by the special token `<hl>`.
- `sentence_answer`: a `string` feature, which is the same as `sentence` but with the answer highlighted by the special token `<hl>`.
Each of the `paragraph_answer`, `paragraph_sentence`, and `sentence_answer` features is intended for training a question generation model, but they carry different information: the `paragraph_answer` and `sentence_answer` features are for answer-aware question generation, while the `paragraph_sentence` feature is for sentence-aware question generation.
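As a sketch of how these fields might be consumed (not the exact preprocessing used in the paper), answer-aware question generation can be cast as a sequence-to-sequence problem mapping `paragraph_answer` to `question`; the `generate question:` prefix below is an assumed, T5-style task prefix rather than something defined by this dataset.

```python
# Sketch: build (input, target) pairs for answer-aware question generation.
# The "generate question: " task prefix is an assumption, not part of the dataset.
from datasets import load_dataset

dataset = load_dataset("lmqg/qg_ruquad")

def to_seq2seq(example):
    return {
        "input_text": "generate question: " + example["paragraph_answer"],
        "target_text": example["question"],
    }

train_pairs = dataset["train"].map(to_seq2seq)
print(train_pairs[0]["input_text"][:120])
print(train_pairs[0]["target_text"])
```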
### Data Splits

| train | validation | test  |
|------:|-----------:|------:|
| 45327 | 5036       | 23936 |
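The split sizes above can be double-checked after loading, for example:

```python
# Sketch: print the number of rows in each split to confirm the table above.
from datasets import load_dataset

dataset = load_dataset("lmqg/qg_ruquad")
for split_name, split in dataset.items():
    print(split_name, split.num_rows)
```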
## Citation Information

```
@inproceedings{ushio-etal-2022-generative,
    title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
    author = "Ushio, Asahi  and
      Alva-Manchego, Fernando  and
      Camacho-Collados, Jose",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, U.A.E.",
    publisher = "Association for Computational Linguistics",
}
```