
Model Card of lmqg/mt5-base-zhquad-qg-ae

This model is a fine-tuned version of google/mt5-base, trained jointly for question generation and answer extraction on the lmqg/qg_zhquad dataset (dataset_name: default) via lmqg.

Overview

  • Language model: google/mt5-base
  • Language: zh
  • Training data: lmqg/qg_zhquad (default)

Usage

  • With lmqg

from lmqg import TransformersQG

# initialize model
model = TransformersQG(language="zh", model="lmqg/mt5-base-zhquad-qg-ae")

# model prediction
question_answer_pairs = model.generate_qa("南安普敦的警察服务由汉普郡警察提供。南安普敦行动的主要基地是一座新的八层专用建筑,造价3000万英镑。该建筑位于南路,2011年启用,靠近南安普敦中央火车站。此前,南安普顿市中心的行动位于市民中心西翼,但由于设施老化,加上计划在旧警察局和地方法院建造一座新博物馆,因此必须搬迁。在Portswood、Banister Park、Hille和Shirley还有其他警察局,在南安普顿中央火车站还有一个英国交通警察局。")
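
The generate_qa call above returns machine-generated question and answer pairs. A minimal sketch of consuming the output, assuming it is a list of (question, answer) tuples as in current lmqg releases:

# print each generated pair (assumes a list of (question, answer) tuples)
for question, answer in question_answer_pairs:
    print("Q:", question)
    print("A:", answer)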

  • With transformers

from transformers import pipeline

pipe = pipeline("text2text-generation", "lmqg/mt5-base-zhquad-qg-ae")

# question generation (the answer span is marked with <hl>)
question = pipe("generate question: 南安普敦的警察服务由汉普郡警察提供。南安普敦行动的主要基地是一座新的八层专用建筑,造价3000万英镑。该建筑位于南路,2011年启用,靠近<hl> 南安普敦中央 <hl>火车站。此前,南安普顿市中心的行动位于市民中心西翼,但由于设施老化,加上计划在旧警察局和地方法院建造一座新博物馆,因此必须搬迁。在Portswood、Banister Park、Hille和Shirley还有其他警察局,在南安普顿中央火车站还有一个英国交通警察局。")

# answer extraction (the target sentence is marked with <hl>)
answer = pipe("extract answers: 南安普敦的警察服务由汉普郡警察提供。 南安普敦行动的主要基地是一座新的八层专用建筑,造价3000万英镑。 <hl> 该建筑位于南路,2011年启用,靠近 南安普敦中央 火车站。 <hl> 此前,南安普顿市中心的行动位于市民中心西翼,但由于设施老化,加上计划在旧警察局和地方法院建造一座新博物馆,因此必须搬迁。 在Portswood、Banister Park、Hille和Shirley还有其他警察局,在南安普顿中央火车站还有一个英国交通警察局。")
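
The two prefixes can be chained into a single routine with the plain transformers pipeline. The sketch below is illustrative only: the generate_qa_pairs helper and the naive sentence split on "。" are assumptions, not part of the lmqg API; only the "extract answers:" / "generate question:" prefixes and the <hl> markers come from the examples above.

from transformers import pipeline

pipe = pipeline("text2text-generation", "lmqg/mt5-base-zhquad-qg-ae")

def generate_qa_pairs(paragraph):
    """Extract an answer per sentence, then generate a question for each extracted answer."""
    pairs = []
    # naive sentence split on the Chinese full stop (illustrative only)
    sentences = [s + "。" for s in paragraph.split("。") if s]
    for sentence in sentences:
        # mark the target sentence with <hl> for answer extraction
        ae_input = "extract answers: " + paragraph.replace(sentence, f"<hl> {sentence} <hl>", 1)
        answer = pipe(ae_input)[0]["generated_text"].strip()
        if not answer or answer not in paragraph:
            continue
        # mark the extracted answer span with <hl> for question generation
        qg_input = "generate question: " + paragraph.replace(answer, f"<hl> {answer} <hl>", 1)
        question = pipe(qg_input)[0]["generated_text"].strip()
        pairs.append((question, answer))
    return pairs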

Evaluation

Metric (Question Generation)

  Metric       Score   Type      Dataset
  BERTScore    76.82   default   lmqg/qg_zhquad
  Bleu_1       36.9    default   lmqg/qg_zhquad
  Bleu_2       25.74   default   lmqg/qg_zhquad
  Bleu_3       19.13   default   lmqg/qg_zhquad
  Bleu_4       14.63   default   lmqg/qg_zhquad
  METEOR       23.69   default   lmqg/qg_zhquad
  MoverScore   57.24   default   lmqg/qg_zhquad
  ROUGE_L      34.07   default   lmqg/qg_zhquad

Metric (Question & Answer Generation, with Gold Answer)

  Metric                            Score   Type      Dataset
  QAAlignedF1Score (BERTScore)      78.4    default   lmqg/qg_zhquad
  QAAlignedF1Score (MoverScore)     53.55   default   lmqg/qg_zhquad
  QAAlignedPrecision (BERTScore)    75.27   default   lmqg/qg_zhquad
  QAAlignedPrecision (MoverScore)   51.56   default   lmqg/qg_zhquad
  QAAlignedRecall (BERTScore)       81.92   default   lmqg/qg_zhquad
  QAAlignedRecall (MoverScore)      55.82   default   lmqg/qg_zhquad

Metric (Answer Extraction)

  Metric             Score   Type      Dataset
  AnswerExactMatch   95.07   default   lmqg/qg_zhquad
  AnswerF1Score      95.15   default   lmqg/qg_zhquad
  BERTScore          99.76   default   lmqg/qg_zhquad
  Bleu_1             92.37   default   lmqg/qg_zhquad
  Bleu_2             89.37   default   lmqg/qg_zhquad
  Bleu_3             86.14   default   lmqg/qg_zhquad
  Bleu_4             82.63   default   lmqg/qg_zhquad
  METEOR             71.18   default   lmqg/qg_zhquad
  MoverScore         98.8    default   lmqg/qg_zhquad
  ROUGE_L            95.72   default   lmqg/qg_zhquad

Training hyperparameters

The following hyperparameters were used during fine-tuning:

  • dataset_path: lmqg/qg_zhquad
  • dataset_name: default
  • input_types: ['paragraph_answer', 'paragraph_sentence']
  • output_types: ['question', 'answer']
  • prefix_types: ['qg', 'ae']
  • model: google/mt5-base
  • max_length: 512
  • max_length_output: 32
  • epoch: 5
  • batch: 32
  • lr: 0.0005
  • fp16: False
  • random_seed: 1
  • gradient_accumulation_steps: 2
  • label_smoothing: 0.15

The full configuration can be found in the fine-tuning config file.
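
As a rough reference, these settings map onto the following transformers Seq2SeqTrainingArguments. This is a hedged sketch for reproducing the run outside of lmqg, not the actual lmqg training script; note that the effective batch size is batch × gradient_accumulation_steps = 32 × 2 = 64.

from transformers import Seq2SeqTrainingArguments

# Illustrative mapping of the listed hyperparameters (not the lmqg trainer itself).
training_args = Seq2SeqTrainingArguments(
    output_dir="mt5-base-zhquad-qg-ae",  # hypothetical output directory
    num_train_epochs=5,                  # epoch: 5
    per_device_train_batch_size=32,      # batch: 32
    gradient_accumulation_steps=2,       # effective batch size of 64
    learning_rate=5e-4,                  # lr: 0.0005
    label_smoothing_factor=0.15,         # label_smoothing: 0.15
    fp16=False,                          # fp16: False
    seed=1,                              # random_seed: 1
)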

Citation

@inproceedings{ushio-etal-2022-generative,
    title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
    author = "Ushio, Asahi  and
        Alva-Manchego, Fernando  and
        Camacho-Collados, Jose",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, U.A.E.",
    publisher = "Association for Computational Linguistics",
}

Dataset used to train lmqg/mt5-base-zhquad-qg-ae: lmqg/qg_zhquad

Evaluation results

  • BLEU4 (Question Generation) on lmqg/qg_zhquad
    self-reported
    14.630
  • ROUGE-L (Question Generation) on lmqg/qg_zhquad
    self-reported
    34.070
  • METEOR (Question Generation) on lmqg/qg_zhquad
    self-reported
    23.690
  • BERTScore (Question Generation) on lmqg/qg_zhquad
    self-reported
    76.820
  • MoverScore (Question Generation) on lmqg/qg_zhquad
    self-reported
    57.240
  • QAAlignedF1Score-BERTScore (Question & Answer Generation (with Gold Answer)) on lmqg/qg_zhquad
    self-reported
    78.400
  • QAAlignedRecall-BERTScore (Question & Answer Generation (with Gold Answer)) on lmqg/qg_zhquad
    self-reported
    81.920
  • QAAlignedPrecision-BERTScore (Question & Answer Generation (with Gold Answer)) on lmqg/qg_zhquad
    self-reported
    75.270
  • QAAlignedF1Score-MoverScore (Question & Answer Generation (with Gold Answer)) on lmqg/qg_zhquad
    self-reported
    53.550
  • QAAlignedRecall-MoverScore (Question & Answer Generation (with Gold Answer)) on lmqg/qg_zhquad
    self-reported
    55.820
  • QAAlignedPrecision-MoverScore (Question & Answer Generation (with Gold Answer)) on lmqg/qg_zhquad
    self-reported
    51.560
  • BLEU4 (Answer Extraction) on lmqg/qg_zhquad
    self-reported
    82.630
  • ROUGE-L (Answer Extraction) on lmqg/qg_zhquad
    self-reported
    95.720
  • METEOR (Answer Extraction) on lmqg/qg_zhquad
    self-reported
    71.180
  • BERTScore (Answer Extraction) on lmqg/qg_zhquad
    self-reported
    99.760
  • MoverScore (Answer Extraction) on lmqg/qg_zhquad
    self-reported
    98.800
  • AnswerF1Score (Answer Extraction) on lmqg/qg_zhquad
    self-reported
    95.150
  • AnswerExactMatch (Answer Extraction) on lmqg/qg_zhquad
    self-reported
    95.070