---
license: cc-by-sa-4.0
pretty_name: TweetQA for question generation
language: en
multilinguality: monolingual
size_categories: 1K<n<10K
source_datasets: tweet_qa
task_categories:
  - text-generation
task_ids:
  - language-modeling
tags:
  - question-generation
---

# Dataset Card for "lmqg/qag_tweetqa"

## Dataset Description

### Dataset Summary

This is a question & answer generation dataset based on tweet_qa. Since the test set of the original dataset is not publicly released, we randomly sampled test questions from the training set.

### Supported Tasks and Leaderboards

- `question-answer-generation`: The dataset is intended for training question & answer generation models. Success on this task is typically measured by BLEU-4, METEOR, ROUGE-L, BERTScore, and MoverScore (see our paper for details).

### Languages

English (en)

## Dataset Structure

An example of 'train' looks as follows.

```python
{
  "paragraph": "I would hope that Phylicia Rashad would apologize now that @missjillscott has! You cannot discount 30 victims who come with similar stories.— JDWhitner (@JDWhitner) July 7, 2015",
  "questions": [ "what should phylicia rashad do now?", "how many victims have come forward?" ],
  "answers": [ "apologize", "30" ],
  "questions_answers": "Q: what should phylicia rashad do now?, A: apologize Q: how many victims have come forward?, A: 30"
}
```
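The `questions_answers` field is a flat concatenation of the parallel `questions` and `answers` lists. A minimal sketch of that formatting (the helper name `format_qa_pairs` is hypothetical, not part of the dataset):

```python
def format_qa_pairs(questions, answers):
    """Join parallel question/answer lists into the flat
    'Q: ..., A: ...' string used by the questions_answers field."""
    return " ".join(f"Q: {q}, A: {a}" for q, a in zip(questions, answers))

example = {
    "questions": ["what should phylicia rashad do now?",
                  "how many victims have come forward?"],
    "answers": ["apologize", "30"],
}

flat = format_qa_pairs(example["questions"], example["answers"])
# Reproduces the questions_answers value of the example record above.
```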

The data fields are the same among all splits.

- `questions`: a list of string features.
- `answers`: a list of string features.
- `paragraph`: a string feature.
- `questions_answers`: a string feature.

### Data Splits

| train | validation | test |
|------:|-----------:|-----:|
|  4536 |        583 |  583 |

## Citation Information

TBA