---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: en
datasets:
- lmqg/qg_squad
pipeline_tag: text2text-generation
tags:
- question generation
widget:
- text: "<hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl>"
  example_title: "Question Generation Example 1"
- text: "<hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl>"
  example_title: "Question Generation Example 2"
- text: "<hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records . <hl>"
  example_title: "Question Generation Example 3"
model-index:
- name: lmqg/bart-base-squad-no-answer
  results:
  - task:
      name: Text2text Generation
      type: text2text-generation
    dataset:
      name: lmqg/qg_squad
      type: default
      args: default
    metrics:
    - name: BLEU4
      type: bleu4
      value: 0.2196802868921128
    - name: ROUGE-L
      type: rouge-l
      value: 0.49695872760015636
    - name: METEOR
      type: meteor
      value: 0.23715466422245665
    - name: BERTScore
      type: bertscore
      value: 0.9037814976684458
    - name: MoverScore
      type: moverscore
      value: 0.6307014769529157
---
# Model Card of `lmqg/bart-base-squad-no-answer`
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) for the question generation task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) dataset (dataset name: default) via [lmqg](https://github.com/asahi417/lm-question-generation).
This model is fine-tuned without answer information, i.e. it generates a question given only a paragraph (whereas the standard model is fine-tuned to generate a question given a paragraph and an associated answer within it).
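As the widget examples above show, the model expects its input sentence to be wrapped in `<hl>` highlight tokens, which mark the span the generated question should focus on. A minimal sketch of how such an input can be built (the helper `highlight_sentence` is illustrative, not part of the lmqg API):

```python
def highlight_sentence(paragraph: str, sentence: str) -> str:
    """Wrap the target sentence of a paragraph in <hl> highlight tokens.

    The <hl> markers tell the model which sentence the generated
    question should be about.
    """
    if sentence not in paragraph:
        raise ValueError("sentence must appear verbatim in paragraph")
    # Replace only the first occurrence so exactly one span is highlighted.
    return paragraph.replace(sentence, f"<hl> {sentence} <hl>", 1)

sentence = ("Beyonce further expanded her acting career, starring as blues singer "
            "Etta James in the 2008 musical biopic, Cadillac Records.")
model_input = highlight_sentence(sentence, sentence)
print(model_input)  # '<hl> Beyonce further expanded ... Cadillac Records. <hl>'
```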
Please cite our paper if you use the model (TBA).
```
@inproceedings{ushio-etal-2022-generative,
    title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration: {A} {U}nified {B}enchmark and {E}valuation",
    author = "Ushio, Asahi and
      Alva-Manchego, Fernando and
      Camacho-Collados, Jose",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, U.A.E.",
    publisher = "Association for Computational Linguistics",
}
```
## Overview
- Language model: facebook/bart-base
- Language: en
- Training data: lmqg/qg_squad (default)
- Online Demo: https://autoqg.net/
- Repository: https://github.com/asahi417/lm-question-generation
- Paper: TBA
## Usage

```python
from transformers import pipeline

model_path = 'lmqg/bart-base-squad-no-answer'
pipe = pipeline("text2text-generation", model_path)

# Question Generation
question = pipe('<hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl>')
```
## Evaluation Metrics

### Metrics

| Dataset | Type | BLEU4 | ROUGE-L | METEOR | BERTScore | MoverScore | Link |
|---|---|---|---|---|---|---|---|
| [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | default | 0.22 | 0.497 | 0.237 | 0.904 | 0.631 | link |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['paragraph_sentence']
- output_types: ['question']
- prefix_types: None
- model: facebook/bart-base
- max_length: 512
- max_length_output: 32
- epoch: 4
- batch: 32
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 8
- label_smoothing: 0.15
The full configuration can be found at fine-tuning config file.
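For orientation, the hyperparameters above imply an effective batch size of 32 × 8 = 256 sequences per optimizer step. A small sketch collecting the values as a plain dictionary (illustrative only; it is not the lmqg config format itself):

```python
# Fine-tuning hyperparameters from the model card, gathered as a plain
# dict for illustration (not the lmqg configuration file format).
hyperparameters = {
    "model": "facebook/bart-base",
    "max_length": 512,
    "max_length_output": 32,
    "epoch": 4,
    "batch": 32,
    "lr": 1e-4,
    "fp16": False,
    "random_seed": 1,
    "gradient_accumulation_steps": 8,
    "label_smoothing": 0.15,
}

# Effective batch size seen by each optimizer update:
effective_batch = hyperparameters["batch"] * hyperparameters["gradient_accumulation_steps"]
print(effective_batch)  # 256
```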
## Citation

```
@inproceedings{ushio-etal-2022-generative,
    title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration: {A} {U}nified {B}enchmark and {E}valuation",
    author = "Ushio, Asahi and
      Alva-Manchego, Fernando and
      Camacho-Collados, Jose",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, U.A.E.",
    publisher = "Association for Computational Linguistics",
}
```