---
license: cc-by-4.0
pretty_name: SberQuAD for question generation
language: ru
multilinguality: monolingual
size_categories: 10K<n<100K
source_datasets: sberquad
task_categories:
  - text-generation
task_ids:
  - language-modeling
tags:
  - question-generation
---

Dataset Card for "lmqg/qg_ruquad"

Dataset Description

Dataset Summary

This is a subset of QG-Bench, the unified question generation benchmark proposed in "Generative Language Models for Paragraph-Level Question Generation: A Unified Benchmark and Evaluation" (EMNLP 2022 main conference). It is a modified version of SberQuAD for the question generation (QG) task. Since the original dataset contains only training and validation sets, we manually sampled a test set from the training set, ensuring that its paragraphs do not overlap with those kept in the training set.
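
The paragraph-disjoint sampling can be pictured with a short sketch (illustrative only; the helper name, test fraction, and seed are assumptions, not the authors' actual preprocessing script):

```python
# Illustrative sketch of a paragraph-disjoint test split; the function name,
# test fraction, and seed are assumptions, not the authors' actual script.
import random

def split_by_paragraph(examples, test_fraction=0.1, seed=42):
    paragraphs = sorted({ex["paragraph"] for ex in examples})
    random.Random(seed).shuffle(paragraphs)
    held_out = set(paragraphs[: int(len(paragraphs) * test_fraction)])
    train = [ex for ex in examples if ex["paragraph"] not in held_out]
    test = [ex for ex in examples if ex["paragraph"] in held_out]
    return train, test
```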

Supported Tasks and Leaderboards

  • question-generation: The dataset is intended for training question generation models. Success on this task is typically measured by BLEU-4, METEOR, ROUGE-L, BERTScore, and MoverScore (see our paper for details); a minimal scoring sketch follows.
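
For illustration, a generated question can be scored against a reference with the Hugging Face evaluate library (a minimal sketch, not the official QG-Bench evaluation pipeline):

```python
# Minimal BLEU-4 scoring sketch; not the official QG-Bench evaluation code.
from evaluate import load

bleu = load("bleu")
result = bleu.compute(
    predictions=["чем представлены органические остатки?"],
    references=[["чем представлены органические остатки?"]],
    max_order=4,  # BLEU-4
)
print(result["bleu"])  # 1.0 for an exact match
```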

Languages

Russian (ru)

Dataset Structure

An example of 'train' looks as follows.

{
  "answer": "известковыми выделениями сине-зелёных водорослей",
  "question": "чем представлены органические остатки?",
  "sentence": "Они представлены известковыми выделениями сине-зелёных водорослей , ходами червей, остатками кишечнополостных.",
  "paragraph": "В протерозойских отложениях органические остатки встречаются намного чаще, чем в архейских. Они представлены...",
  "sentence_answer": "Они представлены <hl> известковыми выделениями сине-зелёных водорослей <hl> , ход...",
  "paragraph_answer": "В протерозойских отложениях органические остатки встречаются намного чаще, чем в архейских. Они представлены <hl> известковыми выделениям...",
  "paragraph_sentence": "В протерозойских отложениях органические остатки встречаются намного чаще, чем в архейских. <hl> Они представлены известковыми выделениями сине-зелёных водорослей , ходами червей, остатками кишечнополостных. <hl> Кроме..."
}

The data fields are the same among all splits.

  • question: a string feature.
  • paragraph: a string feature.
  • answer: a string feature.
  • sentence: a string feature.
  • paragraph_answer: a string feature, the same as paragraph but with the answer highlighted by the special token <hl>.
  • paragraph_sentence: a string feature, the same as paragraph but with the sentence containing the answer highlighted by the special token <hl>.
  • sentence_answer: a string feature, the same as sentence but with the answer highlighted by the special token <hl>.

Each of the paragraph_answer, paragraph_sentence, and sentence_answer features is intended for training a question generation model, each exposing different information: paragraph_answer and sentence_answer support answer-aware question generation, while paragraph_sentence supports sentence-aware question generation. The highlighting convention they share is sketched below.
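
As an illustration (a hedged sketch; this helper is not part of the dataset's released preprocessing code), the highlighted fields can be derived by wrapping the target span with the <hl> token:

```python
# Hypothetical helper reproducing the highlighting convention of the
# *_answer / *_sentence fields; not the dataset's released code.
def highlight(text: str, span: str, token: str = "<hl>") -> str:
    """Wrap the first occurrence of `span` in `text` with highlight tokens."""
    idx = text.index(span)
    return f"{text[:idx]}{token} {span} {token}{text[idx + len(span):]}"

sentence = ("Они представлены известковыми выделениями сине-зелёных "
            "водорослей , ходами червей, остатками кишечнополостных.")
answer = "известковыми выделениями сине-зелёных водорослей"
print(highlight(sentence, answer))
# Они представлены <hl> известковыми выделениями сине-зелёных водорослей <hl> , ходами червей, остатками кишечнополостных.
```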

Data Splits

| train | validation | test  |
|-------|------------|-------|
| 45327 | 5036       | 23936 |
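
These numbers can be verified directly with the datasets library (a usage sketch; requires network access to the Hugging Face Hub):

```python
# Usage sketch: load the dataset and print each split's size.
from datasets import load_dataset

dataset = load_dataset("lmqg/qg_ruquad")  # add trust_remote_code=True if your
                                          # datasets version requires it
for split in ("train", "validation", "test"):
    print(split, dataset[split].num_rows)
```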

Citation Information

@inproceedings{ushio-etal-2022-generative,
    title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
    author = "Ushio, Asahi  and
        Alva-Manchego, Fernando  and
        Camacho-Collados, Jose",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, U.A.E.",
    publisher = "Association for Computational Linguistics",
}