
Model Card of research-backup/t5-small-squadshifts-vanilla-amazon-qg

This model is a fine-tuned version of t5-small for the question generation task on lmqg/qg_squadshifts (dataset name: amazon) via lmqg.

Overview

  • Language model: t5-small
  • Language: en
  • Training data: lmqg/qg_squadshifts (dataset name: amazon)

Usage

  • With lmqg
from lmqg import TransformersQG

# initialize model
model = TransformersQG(language="en", model="research-backup/t5-small-squadshifts-vanilla-amazon-qg")

# model prediction
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
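The list_context and list_answer arguments suggest that parallel lists are also accepted, so several context/answer pairs can be generated in one call; a minimal sketch assuming that batched form:

# batched prediction: parallel lists of contexts and answer spans
# (assumes generate_q accepts lists, as the argument names suggest)
contexts = [
    "William Turner was an English painter who specialised in watercolour landscapes",
    "Beyonce starred as blues singer Etta James in the 2008 musical biopic, Cadillac Records.",
]
answers = ["William Turner", "Etta James"]
questions = model.generate_q(list_context=contexts, list_answer=answers)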
  • With transformers
from transformers import pipeline

pipe = pipeline("text2text-generation", "research-backup/t5-small-squadshifts-vanilla-amazon-qg")
output = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
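The input must carry the generate question: prefix and mark the answer span with <hl> tokens, as in the call above. A small helper for building that input from a raw context and answer (build_qg_input is an illustrative name, not part of the card):

def build_qg_input(context: str, answer: str) -> str:
    # wrap the first occurrence of the answer span in <hl> highlight tokens
    highlighted = context.replace(answer, f"<hl> {answer} <hl>", 1)
    # prepend the task prefix this checkpoint expects (prefix_types: ['qg'])
    return "generate question: " + highlighted

output = pipe(build_qg_input(
    "William Turner was an English painter who specialised in watercolour landscapes",
    "William Turner",
))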

Evaluation

Metric       Score    Type     Dataset
BERTScore    81.77    amazon   lmqg/qg_squadshifts
Bleu_1        4.56    amazon   lmqg/qg_squadshifts
Bleu_2        1.45    amazon   lmqg/qg_squadshifts
Bleu_3        0.60    amazon   lmqg/qg_squadshifts
Bleu_4        0.30    amazon   lmqg/qg_squadshifts
METEOR        5.27    amazon   lmqg/qg_squadshifts
MoverScore   50.50    amazon   lmqg/qg_squadshifts
ROUGE_L       5.59    amazon   lmqg/qg_squadshifts
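These scores come from the lmqg evaluation pipeline. As a rough local cross-check, comparable metrics can be computed with the Hugging Face evaluate library; this is a sketch, not the exact scoring setup behind the table above:

import evaluate  # pip install evaluate bert_score rouge_score

predictions = ["what was william turner's profession?"]  # model outputs
references = ["What was William Turner's profession?"]   # gold questions

bleu = evaluate.load("bleu")
rouge = evaluate.load("rouge")
bertscore = evaluate.load("bertscore")

print(bleu.compute(predictions=predictions, references=[[r] for r in references])["bleu"])
print(rouge.compute(predictions=predictions, references=references)["rougeL"])
print(bertscore.compute(predictions=predictions, references=references, lang="en")["f1"])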

Training hyperparameters

The following hyperparameters were used during fine-tuning:

  • dataset_path: lmqg/qg_squadshifts
  • dataset_name: amazon
  • input_types: ['paragraph_answer']
  • output_types: ['question']
  • prefix_types: ['qg']
  • model: t5-small
  • max_length: 512
  • max_length_output: 32
  • epoch: 1
  • batch: 32
  • lr: 1e-05
  • fp16: False
  • random_seed: 1
  • gradient_accumulation_steps: 4
  • label_smoothing: 0.15

The full configuration can be found in the fine-tuning config file.
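For orientation, the listed values map onto standard transformers Seq2SeqTrainingArguments roughly as follows (a sketch of equivalent settings, not the lmqg trainer itself; max_length and max_length_output are tokenization/generation limits handled outside these arguments):

from transformers import Seq2SeqTrainingArguments

# effective batch size = 32 per step x 4 accumulation steps = 128
args = Seq2SeqTrainingArguments(
    output_dir="t5-small-squadshifts-vanilla-amazon-qg",  # hypothetical output path
    num_train_epochs=1,
    per_device_train_batch_size=32,
    gradient_accumulation_steps=4,
    learning_rate=1e-5,
    label_smoothing_factor=0.15,
    fp16=False,
    seed=1,
)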

Citation

@inproceedings{ushio-etal-2022-generative,
    title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
    author = "Ushio, Asahi  and
        Alva-Manchego, Fernando  and
        Camacho-Collados, Jose",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, U.A.E.",
    publisher = "Association for Computational Linguistics",
}