
Model Card of lmqg/mt5-base-esquad-qg

This model is a fine-tuned version of google/mt5-base for the question generation task on lmqg/qg_esquad (dataset_name: default) via lmqg.

Overview

Usage

  • With lmqg
from lmqg import TransformersQG

# initialize model
model = TransformersQG(language="es", model="lmqg/mt5-base-esquad-qg")

# model prediction
questions = model.generate_q(list_context="a noviembre , que es también la estación lluviosa.", list_answer="noviembre")
  • With transformers
from transformers import pipeline

pipe = pipeline("text2text-generation", "lmqg/mt5-base-esquad-qg")
output = pipe("del <hl> Ministerio de Desarrollo Urbano <hl> , Gobierno de la India.")
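The model expects the target answer span to be wrapped in <hl> tokens inside the passage, as in the example above. A minimal sketch of building that input programmatically is shown below; highlight_answer is a hypothetical helper for illustration, not part of the lmqg or transformers API.

from transformers import pipeline

pipe = pipeline("text2text-generation", "lmqg/mt5-base-esquad-qg")

def highlight_answer(context: str, answer: str) -> str:
    # Hypothetical helper: wrap the first occurrence of the answer with <hl> tokens,
    # matching the input format the pipeline example uses above.
    return context.replace(answer, f"<hl> {answer} <hl>", 1)

context = "del Ministerio de Desarrollo Urbano , Gobierno de la India."
print(pipe(highlight_answer(context, "Ministerio de Desarrollo Urbano")))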

Evaluation

  • Metric (Question Generation)

Metric       Score   Type      Dataset
BERTScore    84.47   default   lmqg/qg_esquad
Bleu_1       26.73   default   lmqg/qg_esquad
Bleu_2       18.46   default   lmqg/qg_esquad
Bleu_3       13.5    default   lmqg/qg_esquad
Bleu_4       10.15   default   lmqg/qg_esquad
METEOR       23.43   default   lmqg/qg_esquad
MoverScore   59.62   default   lmqg/qg_esquad
ROUGE_L      25.45   default   lmqg/qg_esquad
  • Metric (Question & Answer Generation, Reference Answer): Each question is generated from the gold answer. raw metric file
Metric                           Score   Type      Dataset
QAAlignedF1Score (BERTScore)     89.68   default   lmqg/qg_esquad
QAAlignedF1Score (MoverScore)    64.22   default   lmqg/qg_esquad
QAAlignedPrecision (BERTScore)   89.7    default   lmqg/qg_esquad
QAAlignedPrecision (MoverScore)  64.24   default   lmqg/qg_esquad
QAAlignedRecall (BERTScore)      89.66   default   lmqg/qg_esquad
QAAlignedRecall (MoverScore)     64.21   default   lmqg/qg_esquad
  • Metric (Question & Answer Generation)

Metric                           Score   Type      Dataset
QAAlignedF1Score (BERTScore)     80.79   default   lmqg/qg_esquad
QAAlignedF1Score (MoverScore)    55.25   default   lmqg/qg_esquad
QAAlignedPrecision (BERTScore)   78.45   default   lmqg/qg_esquad
QAAlignedPrecision (MoverScore)  53.7    default   lmqg/qg_esquad
QAAlignedRecall (BERTScore)      83.34   default   lmqg/qg_esquad
QAAlignedRecall (MoverScore)     56.99   default   lmqg/qg_esquad
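The scores above were produced with lmqg's own evaluation scripts. As a rough sketch only (not a reproduction of those scripts), similar surface metrics can be computed for generated questions with the evaluate library; the preds and refs lists below are placeholders.

import evaluate

preds = ["¿En qué mes comienza la estación lluviosa?"]   # model outputs (placeholder)
refs = ["¿Qué mes es también la estación lluviosa?"]     # gold reference questions (placeholder)

# Corpus-level BLEU up to 4-grams (roughly corresponds to Bleu_4 above)
bleu = evaluate.load("bleu")
print(bleu.compute(predictions=preds, references=[[r] for r in refs], max_order=4))

# BERTScore with Spanish as the target language
bertscore = evaluate.load("bertscore")
print(bertscore.compute(predictions=preds, references=refs, lang="es"))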

Training hyperparameters

The following hyperparameters were used during fine-tuning:

  • dataset_path: lmqg/qg_esquad
  • dataset_name: default
  • input_types: ['paragraph_answer']
  • output_types: ['question']
  • prefix_types: None
  • model: google/mt5-base
  • max_length: 512
  • max_length_output: 32
  • epoch: 10
  • batch: 4
  • lr: 0.0005
  • fp16: False
  • random_seed: 1
  • gradient_accumulation_steps: 16
  • label_smoothing: 0.15

The full configuration can be found in the fine-tuning config file.
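The fine-tuning itself was run through lmqg's trainer. Purely as an illustration, the listed values map onto Hugging Face Seq2SeqTrainingArguments roughly as follows; output_dir is a placeholder, and max_length / max_length_output would instead apply to tokenization and generation rather than to these arguments.

from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="mt5-base-esquad-qg",     # placeholder path
    num_train_epochs=10,                 # epoch: 10
    per_device_train_batch_size=4,       # batch: 4
    gradient_accumulation_steps=16,      # gradient_accumulation_steps: 16
    learning_rate=5e-4,                  # lr: 0.0005
    label_smoothing_factor=0.15,         # label_smoothing: 0.15
    fp16=False,                          # fp16: False
    seed=1,                              # random_seed: 1
)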

Citation

@inproceedings{ushio-etal-2022-generative,
    title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
    author = "Ushio, Asahi  and
        Alva-Manchego, Fernando  and
        Camacho-Collados, Jose",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, U.A.E.",
    publisher = "Association for Computational Linguistics",
}