
Model Card of lmqg/mt5-small-squad-qg

This model is a fine-tuned version of google/mt5-small for the question generation task, trained on lmqg/qg_squad (dataset_name: default) via lmqg.

Overview

  • Language model: google/mt5-small
  • Language: en
  • Training data: lmqg/qg_squad (default)

Usage

  • With lmqg

```python
from lmqg import TransformersQG

# initialize model
model = TransformersQG(language="en", model="lmqg/mt5-small-squad-qg")

# model prediction
questions = model.generate_q(
    list_context="William Turner was an English painter who specialised in watercolour landscapes",
    list_answer="William Turner",
)
```
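The `list_*` parameter names suggest batch use; a minimal sketch, assuming `generate_q` also accepts parallel lists of contexts and answers (the example strings are illustrative):

```python
from lmqg import TransformersQG

model = TransformersQG(language="en", model="lmqg/mt5-small-squad-qg")

# Batch sketch: one question per (context, answer) pair.
contexts = [
    "William Turner was an English painter who specialised in watercolour landscapes",
    "William Turner was an English painter who specialised in watercolour landscapes",
]
answers = ["William Turner", "watercolour landscapes"]
questions = model.generate_q(list_context=contexts, list_answer=answers)
print(questions)
```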
  • With transformers

```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "lmqg/mt5-small-squad-qg")
output = pipe("<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
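Note the input format in the example above: the answer span is wrapped in `<hl>` tokens inside the passage. A minimal sketch of building that input programmatically (the `highlight_answer` helper is illustrative, not part of the lmqg API):

```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "lmqg/mt5-small-squad-qg")

def highlight_answer(context: str, answer: str) -> str:
    """Wrap the first occurrence of the answer span in <hl> tokens."""
    return context.replace(answer, f"<hl> {answer} <hl>", 1)

context = ("Beyonce further expanded her acting career, starring as blues "
           "singer Etta James in the 2008 musical biopic, Cadillac Records.")
print(pipe(highlight_answer(context, "Beyonce"))[0]["generated_text"])
```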

Evaluation

  • Metrics (Question Generation)

| Metric | Score | Type | Dataset |
|---|---|---|---|
| BERTScore | 90.01 | default | lmqg/qg_squad |
| Bleu_1 | 54.07 | default | lmqg/qg_squad |
| Bleu_2 | 37.62 | default | lmqg/qg_squad |
| Bleu_3 | 28.18 | default | lmqg/qg_squad |
| Bleu_4 | 21.65 | default | lmqg/qg_squad |
| METEOR | 23.83 | default | lmqg/qg_squad |
| MoverScore | 62.75 | default | lmqg/qg_squad |
| ROUGE_L | 48.95 | default | lmqg/qg_squad |
  • Metrics (Question Generation, Out-of-Domain)

| Dataset | Type | BERTScore | Bleu_4 | METEOR | MoverScore | ROUGE_L | Link |
|---|---|---|---|---|---|---|---|
| lmqg/qg_dequad | default | 73.53 | 0.0 | 4.81 | 50.37 | 1.56 | link |
| lmqg/qg_esquad | default | 74.94 | 0.59 | 6.02 | 50.62 | 5.21 | link |
| lmqg/qg_frquad | default | 72.91 | 1.71 | 8.24 | 50.96 | 15.84 | link |
| lmqg/qg_itquad | default | 72.6 | 0.54 | 5.89 | 50.23 | 5.01 | link |
| lmqg/qg_jaquad | default | 66.08 | 0.0 | 0.51 | 46.53 | 6.08 | link |
| lmqg/qg_koquad | default | 66.34 | 0.0 | 0.73 | 45.86 | 0.06 | link |
| lmqg/qg_ruquad | default | 70.89 | 0.0 | 1.78 | 49.1 | 0.99 | link |
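The scores above come from lmqg's own evaluation pipeline. As a rough, non-equivalent illustration of computing similar metrics, here is a sketch using the Hugging Face evaluate library (the `sacrebleu` and `rouge` metric ids are assumed stand-ins for the card's BLEU/ROUGE setup; exact scores will differ):

```python
import evaluate

predictions = ["Who was an English painter specialised in watercolour landscapes?"]
references = [["Who was William Turner?"]]

# BLEU via sacreBLEU (not the card's original Bleu_4 implementation).
bleu = evaluate.load("sacrebleu")
print(bleu.compute(predictions=predictions, references=references)["score"])

# ROUGE-L over the same prediction/reference pair.
rouge = evaluate.load("rouge")
print(rouge.compute(predictions=predictions,
                    references=[r[0] for r in references])["rougeL"])
```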

Training hyperparameters

The following hyperparameters were used during fine-tuning (a rough mapping onto the Hugging Face trainer API is sketched after the list):

  • dataset_path: lmqg/qg_squad
  • dataset_name: default
  • input_types: ['paragraph_answer']
  • output_types: ['question']
  • prefix_types: None
  • model: google/mt5-small
  • max_length: 512
  • max_length_output: 32
  • epoch: 15
  • batch: 64
  • lr: 0.0005
  • fp16: False
  • random_seed: 1
  • gradient_accumulation_steps: 1
  • label_smoothing: 0.15
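A minimal sketch, assuming one wanted to approximate these settings with transformers directly rather than through lmqg; the argument names below belong to `Seq2SeqTrainingArguments`, not to the lmqg config:

```python
from transformers import Seq2SeqTrainingArguments

# Approximate mapping of the card's hyperparameters; the model was
# actually trained via lmqg, so this is illustrative only.
training_args = Seq2SeqTrainingArguments(
    output_dir="mt5-small-squad-qg",
    num_train_epochs=15,                # epoch: 15
    per_device_train_batch_size=64,     # batch: 64
    learning_rate=5e-4,                 # lr: 0.0005
    fp16=False,                         # fp16: False
    seed=1,                             # random_seed: 1
    gradient_accumulation_steps=1,      # gradient_accumulation_steps: 1
    label_smoothing_factor=0.15,        # label_smoothing: 0.15
)
# max_length (512) and max_length_output (32) would correspond to the
# tokenizer's input truncation length and the generation max length.
```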

The full configuration can be found in the fine-tuning config file.

Citation

```bibtex
@inproceedings{ushio-etal-2022-generative,
    title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
    author = "Ushio, Asahi  and
        Alva-Manchego, Fernando  and
        Camacho-Collados, Jose",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, U.A.E.",
    publisher = "Association for Computational Linguistics",
}
```