
flan-t5-small-botco_QA-finetuned-question-generation-context-only

This model is a fine-tuned version of google/flan-t5-small on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 2.0952
  • Rouge1: 83.429
  • Rouge2: 80.1583
  • Rougel: 80.2037
  • Rougelsum: 83.2421
  • Bleu-4: 62.4017
  • Meteor: 87.7448
  • Gen Len: 45.3499
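
A minimal usage sketch for generating a question from a context passage. The repository id is assumed from the model name above (the hosting namespace is not stated), and the prompt format is an assumption, since the card does not document the expected input template:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumed repository id; prepend the correct namespace for where this
# checkpoint is actually hosted.
model_id = "flan-t5-small-botco_QA-finetuned-question-generation-context-only"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical input: the card does not document a prompt template, so a plain
# context passage is passed directly as an assumption.
context = "The Eiffel Tower was completed in 1889 and is located in Paris, France."

inputs = tokenizer(context, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```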

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • gradient_accumulation_steps: 32
  • total_train_batch_size: 64
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 6
  • label_smoothing_factor: 0.1
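
A minimal sketch of how these hyperparameters might be expressed as Seq2SeqTrainingArguments; the output directory and predict_with_generate flag are assumptions, and the dataset, tokenizer, and Trainer wiring are omitted:

```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameters listed above; output_dir is an assumed name,
# not taken from the original training run.
training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-small-botco_QA-question-generation",  # assumption
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=32,   # effective train batch size of 64
    num_train_epochs=6,
    lr_scheduler_type="linear",
    label_smoothing_factor=0.1,
    seed=42,
    predict_with_generate=True,       # assumption: needed for text-based eval metrics
)
```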

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Bleu-4  | Meteor  | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|:-------:|:-------:|
| 2.6539        | 2.33  | 50   | 2.2360          | 82.6627 | 79.5849 | 79.46   | 82.5244   | 57.5926 | 86.8675 | 49.0233 |
| 2.212         | 4.66  | 100  | 2.0952          | 83.429  | 80.1583 | 80.2037 | 83.2421   | 62.4017 | 87.7448 | 45.3499 |
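
One plausible way to compute metrics of this kind with the evaluate library is sketched below; the exact evaluation script for this run is not documented here, the example predictions and references are hypothetical, and the reported numbers appear to be scaled to a 0–100 range:

```python
import evaluate

rouge = evaluate.load("rouge")
bleu = evaluate.load("bleu")
meteor = evaluate.load("meteor")

# Hypothetical predictions/references, for illustration only.
predictions = ["When was the Eiffel Tower completed?"]
references = ["When was the Eiffel Tower completed?"]

rouge_scores = rouge.compute(predictions=predictions, references=references)
bleu_scores = bleu.compute(predictions=predictions,
                           references=[[r] for r in references], max_order=4)
meteor_scores = meteor.compute(predictions=predictions, references=references)

print(rouge_scores["rouge1"], rouge_scores["rougeLsum"])  # ROUGE-1 / ROUGE-Lsum
print(bleu_scores["bleu"])      # corresponds to the Bleu-4 column
print(meteor_scores["meteor"])  # corresponds to the Meteor column
```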

Framework versions

  • Transformers 4.27.2
  • Pytorch 1.13.1+cu116
  • Datasets 2.10.1
  • Tokenizers 0.13.2