
Language Models Fine-tuning on Question Generation: lmqg/bart-base-squad

This model is a fine-tuned version of facebook/bart-base for the question generation task, trained on lmqg/qg_squad (dataset_name: default).

Overview

Usage


```python
from transformers import pipeline

model_path = 'lmqg/bart-base-squad'
pipe = pipeline("text2text-generation", model_path)

# Question generation: mark the answer span in the passage with <hl> tokens
input_text = 'generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.'
question = pipe(input_text)
print(question)  # list of {'generated_text': ...} dicts
```
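The model expects the answer span to be wrapped in `<hl>` tokens inside the passage, as in the example above. A small helper can build that input programmatically; `highlight_answer` is a hypothetical name introduced here for illustration, not part of the lmqg library.

```python
def highlight_answer(paragraph: str, answer: str) -> str:
    """Wrap the first occurrence of the answer span in <hl> tokens and
    prepend the task prefix, matching the input format shown above."""
    if answer not in paragraph:
        raise ValueError("answer must appear verbatim in the paragraph")
    highlighted = paragraph.replace(answer, f"<hl> {answer} <hl>", 1)
    return f"generate question: {highlighted}"

text = highlight_answer(
    "Beyonce further expanded her acting career, starring as blues singer "
    "Etta James in the 2008 musical biopic, Cadillac Records.",
    "Beyonce",
)
print(text)
```

The resulting string can be passed to the pipeline exactly as in the snippet above.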

Evaluation Metrics

Metrics

| Dataset | Type | BLEU4 | ROUGE-L | METEOR | BERTScore | MoverScore | Link |
|---------|------|-------|---------|--------|-----------|------------|------|
| lmqg/qg_squad | default | 0.246842016024829 | 0.5265935194632172 | 0.26054388074278156 | 0.9087148593157368 | 0.6447365106624863 | link |

Out-of-domain Metrics

| Dataset | Type | BLEU4 | ROUGE-L | METEOR | BERTScore | MoverScore | Link |
|---------|------|-------|---------|--------|-----------|------------|------|
| lmqg/qg_squadshifts | default | 0.07288015620049493 | 0.2416012713767735 | 0.23036946160178162 | 0.9153993051135918 | 0.6225373310086992 | link |
| lmqg/qg_squadshifts | new_wiki | 0.10732253983426589 | 0.2843539251435107 | 0.26233713078026283 | 0.9307303692241476 | 0.656720781293701 | link |
| lmqg/qg_squadshifts | nyt | 0.07645313983751752 | 0.2390325229516282 | 0.244330483594333 | 0.9235989114144583 | 0.6368628469746445 | link |
| lmqg/qg_squadshifts | amazon | 0.05824165264328302 | 0.23816054441894524 | 0.2126541577267873 | 0.9049284884636415 | 0.6026811246610306 | link |
| lmqg/qg_squadshifts | reddit | 0.053789810023704955 | 0.2141155595451475 | 0.20395821936787215 | 0.905714302466044 | 0.6013927660089013 | link |
| lmqg/qg_subjqa | default | 0.007260587205400462 | 0.12916262288335115 | 0.13825504134536976 | 0.8789821396999578 | 0.5589639015092911 | link |
| lmqg/qg_subjqa | books | 1.4952813458186383e-10 | 0.10769136267285535 | 0.11520101781020654 | 0.8774975922095214 | 0.5520873074919223 | link |
| lmqg/qg_subjqa | electronics | 1.3766381900873328e-06 | 0.14287460464803423 | 0.14866637711177003 | 0.8759880110997111 | 0.5607199201429516 | link |
| lmqg/qg_subjqa | grocery | 0.006003840641121225 | 0.1248840598199836 | 0.1553374628831024 | 0.8737966828346252 | 0.5662545638649026 | link |
| lmqg/qg_subjqa | movies | 0.0108258720771249 | 0.1389815289507374 | 0.12855849168399078 | 0.8773110466344016 | 0.5555164603510797 | link |
| lmqg/qg_subjqa | restaurants | 1.7873892359263582e-10 | 0.12160976589996819 | 0.1146979295288459 | 0.8771339668070569 | 0.5490739019998478 | link |
| lmqg/qg_subjqa | tripadvisor | 0.010174680918435602 | 0.1341425139885307 | 0.1391725168440533 | 0.8877592491739579 | 0.5590591813016728 | link |
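The BLEU-4 column makes the domain gap easy to quantify. The quick comparison below simply copies the scores from the tables above into Python; the average over the four SQuADShifts subsets is computed here for illustration and is not a number reported by lmqg.

```python
# BLEU-4 scores copied verbatim from the tables above
in_domain = 0.246842016024829          # lmqg/qg_squad (default)

squadshifts = {                        # lmqg/qg_squadshifts subsets
    "new_wiki": 0.10732253983426589,
    "nyt": 0.07645313983751752,
    "amazon": 0.05824165264328302,
    "reddit": 0.053789810023704955,
}
subjqa_default = 0.007260587205400462  # lmqg/qg_subjqa (default)

squadshifts_avg = sum(squadshifts.values()) / len(squadshifts)
print(f"in-domain BLEU-4:       {in_domain:.3f}")
print(f"SQuADShifts avg BLEU-4: {squadshifts_avg:.3f}")
print(f"SubjQA default BLEU-4:  {subjqa_default:.3f}")
```

The pattern is consistent with the tables: scores drop by roughly two thirds on the SQuADShifts news/forum domains and collapse almost entirely on the SubjQA review domains.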

Training hyperparameters

The following hyperparameters were used during fine-tuning:

  • dataset_path: lmqg/qg_squad
  • dataset_name: default
  • input_types: ['paragraph_answer']
  • output_types: ['question']
  • prefix_types: None
  • model: facebook/bart-base
  • max_length: 512
  • max_length_output: 32
  • epoch: 7
  • batch: 32
  • lr: 0.0001
  • fp16: False
  • random_seed: 1
  • gradient_accumulation_steps: 8
  • label_smoothing: 0.15

The full configuration can be found in the fine-tuning config file.
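Two of the hyperparameters above are worth unpacking: with gradient accumulation, the effective batch size is larger than the per-step batch, and label smoothing of 0.15 redistributes target probability mass away from the gold token. The sketch below spells out the arithmetic; the vocabulary size (50,265 for BART's GPT-2-style BPE) is an assumption, and the exact smoothing formulation used during training may differ.

```python
# Effective batch size: per-step batch * gradient accumulation steps
batch = 32
gradient_accumulation_steps = 8
effective_batch_size = batch * gradient_accumulation_steps

# label_smoothing = 0.15: the target distribution puts (1 - eps) on the
# gold token plus eps spread uniformly over the vocabulary
epsilon = 0.15
vocab_size = 50265  # assumed BART (GPT-2 BPE) vocabulary size
gold_prob = (1 - epsilon) + epsilon / vocab_size
other_prob = epsilon / vocab_size

# the smoothed distribution still sums to 1
total = gold_prob + (vocab_size - 1) * other_prob
print(effective_batch_size, round(gold_prob, 4))
```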

Citation

TBA

