
Model Card of research-backup/t5-large-squadshifts-vanilla-nyt-qg

This model is a fine-tuned version of t5-large for the question generation task on the lmqg/qg_squadshifts dataset (dataset_name: nyt) via lmqg.

Overview

Usage

  • With lmqg

from lmqg import TransformersQG

# initialize model
model = TransformersQG(language="en", model="research-backup/t5-large-squadshifts-vanilla-nyt-qg")

# model prediction
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
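
Since the arguments are named list_context and list_answer, generate_q presumably also accepts parallel lists for batch prediction; a minimal sketch under that assumption (not verified against the lmqg API):

# batch prediction (assumes list_context/list_answer accept parallel lists)
contexts = [
    "William Turner was an English painter who specialised in watercolour landscapes",
    "Beyonce starred as blues singer Etta James in the 2008 musical biopic, Cadillac Records",
]
answers = ["William Turner", "Etta James"]
questions = model.generate_q(list_context=contexts, list_answer=answers)
print(questions)  # one generated question per (context, answer) pair
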
  • With transformers
from transformers import pipeline

pipe = pipeline("text2text-generation", "research-backup/t5-large-squadshifts-vanilla-nyt-qg")
output = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
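
As the example shows, the pipeline input is the raw context with the answer span wrapped in <hl> tokens and the whole string prefixed with "generate question: " (matching prefix_types: ['qg'] below). A small helper can assemble that input; highlight_answer is a hypothetical name introduced here, not part of lmqg or transformers:

# hypothetical helper: wrap the answer span in <hl> markers and add the task prefix
def highlight_answer(context: str, answer: str) -> str:
    highlighted = context.replace(answer, f"<hl> {answer} <hl>", 1)
    return f"generate question: {highlighted}"

text = highlight_answer(
    "Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.",
    "Etta James",
)
output = pipe(text)
print(output)  # e.g. [{'generated_text': '...'}]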

Evaluation

Metric       Score   Type   Dataset
BERTScore    92.26   nyt    lmqg/qg_squadshifts
Bleu_1       23.22   nyt    lmqg/qg_squadshifts
Bleu_2       15.19   nyt    lmqg/qg_squadshifts
Bleu_3       10.59   nyt    lmqg/qg_squadshifts
Bleu_4        7.69   nyt    lmqg/qg_squadshifts
METEOR       23.29   nyt    lmqg/qg_squadshifts
MoverScore   63.63   nyt    lmqg/qg_squadshifts
ROUGE_L      23.30   nyt    lmqg/qg_squadshifts

Training hyperparameters

The following hyperparameters were used during fine-tuning:

  • dataset_path: lmqg/qg_squadshifts
  • dataset_name: nyt
  • input_types: ['paragraph_answer']
  • output_types: ['question']
  • prefix_types: ['qg']
  • model: t5-large
  • max_length: 512
  • max_length_output: 32
  • epoch: 6
  • batch: 16
  • lr: 0.0001
  • fp16: False
  • random_seed: 1
  • gradient_accumulation_steps: 4
  • label_smoothing: 0.15

The full configuration can be found in the fine-tuning config file.
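
With batch set to 16 and gradient_accumulation_steps set to 4, the effective batch size is 64. Fine-tuning was run through lmqg rather than a hand-written script, but the listed values map naturally onto Hugging Face Seq2SeqTrainingArguments; the sketch below shows an equivalent configuration only (data loading and the Trainer call are elided, and output_dir is an arbitrary placeholder):

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments

tokenizer = AutoTokenizer.from_pretrained("t5-large")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-large")

# hyperparameters from the list above; effective batch size = 16 * 4 = 64
args = Seq2SeqTrainingArguments(
    output_dir="t5-large-squadshifts-vanilla-nyt-qg",  # placeholder
    num_train_epochs=6,
    per_device_train_batch_size=16,
    gradient_accumulation_steps=4,
    learning_rate=1e-4,
    fp16=False,
    seed=1,
    label_smoothing_factor=0.15,
)

Under this configuration, inputs would be the paragraph_answer strings (the context with the <hl>-highlighted answer, plus the qg prefix) truncated to 512 tokens, and targets the questions truncated to 32 tokens.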

Citation

@inproceedings{ushio-etal-2022-generative,
    title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
    author = "Ushio, Asahi  and
        Alva-Manchego, Fernando  and
        Camacho-Collados, Jose",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates",
    publisher = "Association for Computational Linguistics",
}