license: cc-by-4.0
metrics:
  - bleu4
  - meteor
  - rouge-l
  - bertscore
  - moverscore
language: it
datasets:
  - lmqg/qg_itquad
pipeline_tag: text2text-generation
tags:
  - question generation
widget:
  - text: >-
      <hl> Dopo il 1971 <hl> , l' OPEC ha tardato ad adeguare i prezzi per
      riflettere tale deprezzamento.
    example_title: Question Generation Example 1
  - text: >-
      L' individuazione del petrolio e lo sviluppo di nuovi giacimenti
      richiedeva in genere <hl> da cinque a dieci anni <hl> prima di una
      produzione significativa.
    example_title: Question Generation Example 2
  - text: il <hl> Giappone <hl> è stato il paese più dipendente dal petrolio arabo.
    example_title: Question Generation Example 3
model-index:
  - name: lmqg/mt5-small-itquad-qg
    results:
      - task:
          name: Text2text Generation
          type: text2text-generation
        dataset:
          name: lmqg/qg_itquad
          type: default
          args: default
        metrics:
          - name: BLEU4 (Question Generation)
            type: bleu4_question_generation
            value: 7.37
          - name: ROUGE-L (Question Generation)
            type: rouge_l_question_generation
            value: 21.93
          - name: METEOR (Question Generation)
            type: meteor_question_generation
            value: 17.57
          - name: BERTScore (Question Generation)
            type: bertscore_question_generation
            value: 80.8
          - name: MoverScore (Question Generation)
            type: moverscore_question_generation
            value: 56.79
          - name: BLEU4 (Question & Answer Generation)
            type: bleu4_question_answer_generation
            value: 15.44
          - name: ROUGE-L (Question & Answer Generation)
            type: rouge_l_question_answer_generation
            value: 40.08
          - name: METEOR (Question & Answer Generation)
            type: meteor_question_answer_generation
            value: 34.31
          - name: BERTScore (Question & Answer Generation)
            type: bertscore_question_answer_generation
            value: 86.62
          - name: MoverScore (Question & Answer Generation)
            type: moverscore_question_answer_generation
            value: 60.68
          - name: >-
              QAAlignedF1Score-BERTScore (Question & Answer Generation) [Gold
              Answer]
            type: >-
              qa_aligned_f1_score_bertscore_question_answer_generation_gold_answer
            value: 87.66
          - name: >-
              QAAlignedRecall-BERTScore (Question & Answer Generation) [Gold
              Answer]
            type: qa_aligned_recall_bertscore_question_answer_generation_gold_answer
            value: 87.57
          - name: >-
              QAAlignedPrecision-BERTScore (Question & Answer Generation) [Gold
              Answer]
            type: >-
              qa_aligned_precision_bertscore_question_answer_generation_gold_answer
            value: 87.76
          - name: >-
              QAAlignedF1Score-MoverScore (Question & Answer Generation) [Gold
              Answer]
            type: >-
              qa_aligned_f1_score_moverscore_question_answer_generation_gold_answer
            value: 61.6
          - name: >-
              QAAlignedRecall-MoverScore (Question & Answer Generation) [Gold
              Answer]
            type: >-
              qa_aligned_recall_moverscore_question_answer_generation_gold_answer
            value: 61.48
          - name: >-
              QAAlignedPrecision-MoverScore (Question & Answer Generation) [Gold
              Answer]
            type: >-
              qa_aligned_precision_moverscore_question_answer_generation_gold_answer
            value: 61.73

Model Card of lmqg/mt5-small-itquad-qg

This model is a fine-tuned version of google/mt5-small for the question generation task, trained on lmqg/qg_itquad (dataset_name: default) via lmqg.

Overview

Usage

  • With lmqg

```python
from lmqg import TransformersQG

# initialize model
model = TransformersQG(language="it", model="lmqg/mt5-small-itquad-qg")

# model prediction
questions = model.generate_q(list_context="Dopo il 1971 , l' OPEC ha tardato ad adeguare i prezzi per riflettere tale deprezzamento.", list_answer="Dopo il 1971")
```

  • With transformers

```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "lmqg/mt5-small-itquad-qg")
output = pipe("<hl> Dopo il 1971 <hl> , l' OPEC ha tardato ad adeguare i prezzi per riflettere tale deprezzamento.")
```
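The pipeline input must have the answer span wrapped in `<hl>` tokens, as in the widget examples above. A minimal sketch of a helper that produces this format (the function name is illustrative, not part of lmqg):

```python
def highlight_answer(context: str, answer: str, hl_token: str = "<hl>") -> str:
    """Wrap the first occurrence of `answer` in highlight tokens."""
    start = context.find(answer)
    if start == -1:
        raise ValueError("answer span not found in context")
    end = start + len(answer)
    return f"{context[:start]}{hl_token} {answer} {hl_token}{context[end:]}"

text = highlight_answer(
    "Dopo il 1971 , l' OPEC ha tardato ad adeguare i prezzi.",
    "Dopo il 1971",
)
print(text)  # <hl> Dopo il 1971 <hl> , l' OPEC ha tardato ad adeguare i prezzi.
```

The result can be passed directly to the `pipe` call shown above.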

Evaluation

| Metric | Score | Type | Dataset |
|:-------|------:|:-----|:--------|
| BERTScore | 80.8 | default | lmqg/qg_itquad |
| Bleu_1 | 22.78 | default | lmqg/qg_itquad |
| Bleu_2 | 14.93 | default | lmqg/qg_itquad |
| Bleu_3 | 10.34 | default | lmqg/qg_itquad |
| Bleu_4 | 7.37 | default | lmqg/qg_itquad |
| METEOR | 17.57 | default | lmqg/qg_itquad |
| MoverScore | 56.79 | default | lmqg/qg_itquad |
| ROUGE_L | 21.93 | default | lmqg/qg_itquad |
  • Metric (Question & Answer Generation): since this model cannot generate answers, the QAG metrics are computed from the gold answer and the question generated for it (raw metric file).
| Metric | Score | Type | Dataset |
|:-------|------:|:-----|:--------|
| BERTScore | 86.62 | default | lmqg/qg_itquad |
| Bleu_1 | 40.5 | default | lmqg/qg_itquad |
| Bleu_2 | 28.64 | default | lmqg/qg_itquad |
| Bleu_3 | 20.78 | default | lmqg/qg_itquad |
| Bleu_4 | 15.44 | default | lmqg/qg_itquad |
| METEOR | 34.31 | default | lmqg/qg_itquad |
| MoverScore | 60.68 | default | lmqg/qg_itquad |
| QAAlignedF1Score (BERTScore) | 87.66 | default | lmqg/qg_itquad |
| QAAlignedF1Score (MoverScore) | 61.6 | default | lmqg/qg_itquad |
| QAAlignedPrecision (BERTScore) | 87.76 | default | lmqg/qg_itquad |
| QAAlignedPrecision (MoverScore) | 61.73 | default | lmqg/qg_itquad |
| QAAlignedRecall (BERTScore) | 87.57 | default | lmqg/qg_itquad |
| QAAlignedRecall (MoverScore) | 61.48 | default | lmqg/qg_itquad |
| ROUGE_L | 40.08 | default | lmqg/qg_itquad |
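The Bleu_1 through Bleu_4 rows are modified n-gram precision scores at increasing n-gram order. A self-contained, unsmoothed sentence-level sketch of the computation (illustrative only; the reported numbers come from the lmqg evaluation scripts, which score at corpus level):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(hyp, ref, max_n=4):
    """Unsmoothed sentence-level BLEU with brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        hyp_counts = Counter(ngrams(hyp, n))
        ref_counts = Counter(ngrams(ref, n))
        # clipped n-gram matches against the reference
        overlap = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
        total = max(sum(hyp_counts.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0  # any zero precision makes the geometric mean zero
    log_avg = sum(math.log(p) for p in precisions) / max_n
    # brevity penalty discourages overly short hypotheses
    bp = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / max(len(hyp), 1))
    return bp * math.exp(log_avg)

print(bleu("quando fu fondata l' opec ?".split(), "quando fu fondata l' opec ?".split()))  # 1.0
```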

Training hyperparameters

The following hyperparameters were used during fine-tuning:

  • dataset_path: lmqg/qg_itquad
  • dataset_name: default
  • input_types: ['paragraph_answer']
  • output_types: ['question']
  • prefix_types: None
  • model: google/mt5-small
  • max_length: 512
  • max_length_output: 32
  • epoch: 15
  • batch: 16
  • lr: 0.0005
  • fp16: False
  • random_seed: 1
  • gradient_accumulation_steps: 4
  • label_smoothing: 0.0

The full configuration can be found in the fine-tuning config file.
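With batch 16 and gradient_accumulation_steps 4, the effective batch size is 64. A hypothetical mapping of the hyperparameters above onto transformers-style argument names (the key names are assumptions for illustration, not taken from the actual training config):

```python
# Card hyperparameters, keyed by (assumed) transformers TrainingArguments names.
hparams = {
    "learning_rate": 0.0005,
    "num_train_epochs": 15,
    "per_device_train_batch_size": 16,
    "gradient_accumulation_steps": 4,
    "fp16": False,
    "seed": 1,
    "label_smoothing_factor": 0.0,
}

# Gradient accumulation multiplies the per-step batch into the effective batch.
effective_batch = (
    hparams["per_device_train_batch_size"] * hparams["gradient_accumulation_steps"]
)
print(effective_batch)  # 64
```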

Citation

```bibtex
@inproceedings{ushio-etal-2022-generative,
    title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
    author = "Ushio, Asahi  and
        Alva-Manchego, Fernando  and
        Camacho-Collados, Jose",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, U.A.E.",
    publisher = "Association for Computational Linguistics",
}
```