
Model Card of lmqg/mbart-large-cc25-esquad-qag

This model is a fine-tuned version of facebook/mbart-large-cc25 for the question & answer pair generation task on lmqg/qag_esquad (dataset_name: default) via lmqg.

Overview

  • Language model: facebook/mbart-large-cc25
  • Language: es
  • Training data: lmqg/qag_esquad (default)

Usage

  • With lmqg
from lmqg import TransformersQG

# initialize model
model = TransformersQG(language="es", model="lmqg/mbart-large-cc25-esquad-qag")

# model prediction
question_answer_pairs = model.generate_qa("a noviembre , que es también la estación lluviosa.")
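
The pairs come back as a list of (question, answer) tuples, per the lmqg documentation; a minimal way to inspect them, assuming that return type:

# each element is expected to be a (question, answer) tuple
for question, answer in question_answer_pairs:
    print(f"Q: {question}")
    print(f"A: {answer}")
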
  • With transformers
from transformers import pipeline

pipe = pipeline("text2text-generation", "lmqg/mbart-large-cc25-esquad-qag")
output = pipe("del Ministerio de Desarrollo Urbano , Gobierno de la India.")
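
The pipeline returns a list of dicts with a generated_text field. How this checkpoint packs several pairs into that one string is not shown above; the parser below assumes the common lmqg convention of "question: ..., answer: ..." chunks joined by " | ", so inspect the raw output before relying on it:

# ASSUMPTION: pairs are serialized as "question: ..., answer: ..." chunks
# joined by " | "; verify against the actual pipeline output first.
text = output[0]["generated_text"]
pairs = []
for chunk in text.split(" | "):
    if ", answer:" in chunk:
        question, answer = chunk.split(", answer:", 1)
        pairs.append((question.replace("question:", "").strip(), answer.strip()))
print(pairs)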

Evaluation

Metric                           Score   Type      Dataset
QAAlignedF1Score (BERTScore)     78.8    default   lmqg/qag_esquad
QAAlignedF1Score (MoverScore)    54.0    default   lmqg/qag_esquad
QAAlignedPrecision (BERTScore)   76.59   default   lmqg/qag_esquad
QAAlignedPrecision (MoverScore)  52.57   default   lmqg/qag_esquad
QAAlignedRecall (BERTScore)      81.21   default   lmqg/qag_esquad
QAAlignedRecall (MoverScore)     55.63   default   lmqg/qag_esquad
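
For context, the QAAligned metrics (Ushio et al., 2022) serialize each question-answer pair into a single string, match every generated pair to its best-scoring reference pair (precision) and every reference pair to its best-scoring generated pair (recall), and report the harmonic mean as F1. The sketch below illustrates the idea with BERTScore as the base similarity; it is not the lmqg implementation, and the pair serialization and language setting are assumptions:

from itertools import product
from bert_score import score as bert_score

def qa_aligned_f1(pred_pairs, gold_pairs):
    # serialize each (question, answer) pair into one string (assumed format)
    preds = [f"question: {q}, answer: {a}" for q, a in pred_pairs]
    golds = [f"question: {q}, answer: {a}" for q, a in gold_pairs]
    # BERTScore F1 over every prediction/reference combination
    cands, refs = zip(*product(preds, golds))
    _, _, f1 = bert_score(list(cands), list(refs), lang="es")
    sim = f1.view(len(preds), len(golds))
    precision = sim.max(dim=1).values.mean().item()  # best reference per prediction
    recall = sim.max(dim=0).values.mean().item()     # best prediction per reference
    return 2 * precision * recall / (precision + recall)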

Training hyperparameters

The following hyperparameters were used during fine-tuning:

  • dataset_path: lmqg/qag_esquad
  • dataset_name: default
  • input_types: ['paragraph']
  • output_types: ['questions_answers']
  • prefix_types: None
  • model: facebook/mbart-large-cc25
  • max_length: 512
  • max_length_output: 256
  • epoch: 6
  • batch: 8
  • lr: 0.0001
  • fp16: False
  • random_seed: 1
  • gradient_accumulation_steps: 8
  • label_smoothing: 0.0

The full configuration can be found in the fine-tuning config file.
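
For readers reproducing a comparable run directly with transformers, the values above map roughly onto Seq2SeqTrainingArguments as sketched below; this mirrors the listed hyperparameters but is not the lmqg training loop itself, and max_length / max_length_output would apply at tokenization and generation time rather than here:

from transformers import Seq2SeqTrainingArguments

# rough transformers equivalent of the hyperparameters listed above
training_args = Seq2SeqTrainingArguments(
    output_dir="mbart-large-cc25-esquad-qag",  # hypothetical output path
    num_train_epochs=6,              # epoch: 6
    per_device_train_batch_size=8,   # batch: 8
    learning_rate=1e-4,              # lr: 0.0001
    fp16=False,                      # fp16: False
    seed=1,                          # random_seed: 1
    gradient_accumulation_steps=8,   # gradient_accumulation_steps: 8
    label_smoothing_factor=0.0,      # label_smoothing: 0.0
)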

Citation

@inproceedings{ushio-etal-2022-generative,
    title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
    author = "Ushio, Asahi  and
        Alva-Manchego, Fernando  and
        Camacho-Collados, Jose",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, U.A.E.",
    publisher = "Association for Computational Linguistics",
}