Model Card of lmqg/mbart-large-cc25-squad-qg

This model is a fine-tuned version of facebook/mbart-large-cc25 for the question generation task, trained on lmqg/qg_squad (dataset name: default) via lmqg.

Usage

  • With lmqg

from lmqg import TransformersQG

# initialize model
model = TransformersQG(language="en", model="lmqg/mbart-large-cc25-squad-qg")

# model prediction
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
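
Although the example above passes single strings, the list_context and list_answer argument names suggest batch input. A minimal sketch of batched prediction, assuming generate_q accepts parallel lists of contexts and answers and returns one generated question per pair:

from lmqg import TransformersQG

model = TransformersQG(language="en", model="lmqg/mbart-large-cc25-squad-qg")

# one context repeated for two different answer spans (hypothetical inputs)
contexts = [
    "William Turner was an English painter who specialised in watercolour landscapes",
    "William Turner was an English painter who specialised in watercolour landscapes",
]
answers = ["William Turner", "watercolour landscapes"]

# assumed to return a list of questions, one per (context, answer) pair
questions = model.generate_q(list_context=contexts, list_answer=answers)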

  • With transformers

from transformers import pipeline

pipe = pipeline("text2text-generation", "lmqg/mbart-large-cc25-squad-qg")
output = pipe("<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
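
The pipeline input marks the answer span with <hl> tokens inside the passage, as shown above. A minimal helper for building that input from a raw context and answer span (highlight_answer is illustrative, not part of the card or the library):

from transformers import pipeline

def highlight_answer(context: str, answer: str) -> str:
    # wrap the first occurrence of the answer span in <hl> tokens,
    # matching the input format used in the example above
    return context.replace(answer, f"<hl> {answer} <hl>", 1)

pipe = pipeline("text2text-generation", "lmqg/mbart-large-cc25-squad-qg")
context = "Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records."
print(pipe(highlight_answer(context, "Beyonce")))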

Evaluation

  • Metrics (Question Generation, In-Domain)

Metric       Score   Type      Dataset
BERTScore    90.36   default   lmqg/qg_squad
Bleu_1       56      default   lmqg/qg_squad
Bleu_2       39.41   default   lmqg/qg_squad
Bleu_3       29.76   default   lmqg/qg_squad
Bleu_4       23.03   default   lmqg/qg_squad
METEOR       25.1    default   lmqg/qg_squad
MoverScore   63.63   default   lmqg/qg_squad
ROUGE_L      50.58   default   lmqg/qg_squad
  • Metrics (Question Generation, Out-of-Domain)

Dataset          Type      BERTScore   Bleu_4   METEOR   MoverScore   ROUGE_L
lmqg/qg_dequad   default   11.05       0.0      1.05     44.94        3.4
lmqg/qg_esquad   default   60.73       0.57     5.27     48.76        18.99
lmqg/qg_frquad   default   16.47       0.02     1.55     45.35        5.13
lmqg/qg_itquad   default   41.46       0.48     3.84     47.28        13.25
lmqg/qg_jaquad   default   19.89       0.06     1.74     45.51        6.11
lmqg/qg_koquad   default   31.67       0.38     3.06     46.59        10.34
lmqg/qg_ruquad   default   26.19       0.18     2.65     46.09        8.34
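
These are standard n-gram and embedding-based generation metrics. For reference, a minimal sketch of scoring model outputs against gold questions with the sacrebleu package; this is an illustration only, not the lmqg evaluation pipeline that produced the numbers above, so it will not reproduce them exactly:

from sacrebleu import corpus_bleu

# hypothetical model outputs and gold reference questions
hypotheses = ["Who was an English painter specialising in watercolour landscapes?"]
references = [["Who was the English painter who specialised in watercolour landscapes?"]]

# corpus-level BLEU; sacrebleu takes one list per reference set
bleu = corpus_bleu(hypotheses, references)
print(f"BLEU-4: {bleu.score:.2f}")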

Training hyperparameters

The following hyperparameters were used during fine-tuning:

  • dataset_path: lmqg/qg_squad
  • dataset_name: default
  • input_types: ['paragraph_answer']
  • output_types: ['question']
  • prefix_types: None
  • model: facebook/mbart-large-cc25
  • max_length: 512
  • max_length_output: 32
  • epoch: 6
  • batch: 32
  • lr: 0.0001
  • fp16: False
  • random_seed: 1
  • gradient_accumulation_steps: 2
  • label_smoothing: 0.15

The full configuration can be found in the fine-tuning config file.
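
These hyperparameters come from the lmqg trainer. A minimal sketch of how they would map onto transformers' Seq2SeqTrainingArguments, as an illustration rather than the actual training script; note the effective batch size is batch × gradient_accumulation_steps = 64:

from transformers import Seq2SeqTrainingArguments

# illustrative mapping of the card's hyperparameters onto the
# transformers trainer; not the lmqg training script itself
args = Seq2SeqTrainingArguments(
    output_dir="mbart-large-cc25-squad-qg",  # hypothetical output path
    num_train_epochs=6,
    per_device_train_batch_size=32,
    gradient_accumulation_steps=2,  # effective batch size 64
    learning_rate=1e-4,
    label_smoothing_factor=0.15,
    fp16=False,
    seed=1,
)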

Citation

@inproceedings{ushio-etal-2022-generative,
    title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
    author = "Ushio, Asahi  and
        Alva-Manchego, Fernando  and
        Camacho-Collados, Jose",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, U.A.E.",
    publisher = "Association for Computational Linguistics",
}