
squad-bn-qgen-mt5-all-metric

This model is a fine-tuned version of google/mt5-small on the squad_bn dataset; a brief usage sketch follows the results list below. It achieves the following results on the evaluation set:

  • Loss: 0.7273
  • ROUGE-1 Precision: 35.8589
  • ROUGE-1 Recall: 29.7041
  • ROUGE-1 F-measure: 31.6373
  • ROUGE-2 Precision: 15.4203
  • ROUGE-2 Recall: 12.5155
  • ROUGE-2 F-measure: 13.3978
  • ROUGE-L Precision: 34.4684
  • ROUGE-L Recall: 28.5887
  • ROUGE-L F-measure: 30.4627
  • ROUGE-Lsum Precision: 34.4252
  • ROUGE-Lsum Recall: 28.5362
  • ROUGE-Lsum F-measure: 30.4053
  • SacreBLEU: 6.4143
  • METEOR: 0.1416
  • Gen Len: 16.7199
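
The repository name and the squad_bn dataset suggest this checkpoint is intended for Bengali question generation. The snippet below is a minimal inference sketch; the hub namespace and the "answer: ... context: ..." prompt format are illustrative assumptions, since the exact input format used during fine-tuning is not documented in this card.

```python
# Minimal inference sketch (repository id and prompt format are assumptions,
# not documented in this card).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "squad-bn-qgen-mt5-all-metric"  # replace with the full hub id, e.g. "<namespace>/squad-bn-qgen-mt5-all-metric"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical "answer ... context ..." prompt; adjust to the format used during fine-tuning.
context = "ঢাকা বাংলাদেশের রাজধানী।"   # "Dhaka is the capital of Bangladesh."
answer = "ঢাকা"
text = f"answer: {answer} context: {context}"

inputs = tokenizer(text, return_tensors="pt", truncation=True)
output_ids = model.generate(**inputs, max_length=32, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```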

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5
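
For reference, these hyperparameters correspond roughly to the following Seq2SeqTrainingArguments in the Transformers Trainer API. This is a reconstruction for illustration, not the original training script; output_dir and predict_with_generate are assumptions.

```python
# Sketch of the listed hyperparameters as Seq2SeqTrainingArguments
# (reconstruction only; the original training script is not part of this card).
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="squad-bn-qgen-mt5-all-metric",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,            # Trainer's default Adam(W) settings
    predict_with_generate=True,   # needed for ROUGE/BLEU-style evaluation (assumption)
)
```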

Training results

P/R/F denote precision, recall, and F-measure.

| Training Loss | Epoch | Step | Validation Loss | ROUGE-1 P | ROUGE-1 R | ROUGE-1 F | ROUGE-2 P | ROUGE-2 R | ROUGE-2 F | ROUGE-L P | ROUGE-L R | ROUGE-L F | ROUGE-Lsum P | ROUGE-Lsum R | ROUGE-Lsum F | SacreBLEU | METEOR | Gen Len |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.8449 | 1.0 | 16396 | 0.7340 | 31.6476 | 26.8901 | 28.2871 | 13.621 | 11.3545 | 11.958 | 30.3276 | 25.7754 | 27.1048 | 30.3426 | 25.7489 | 27.0991 | 5.9655 | 0.1336 | 16.8685 |
| 0.7607 | 2.0 | 32792 | 0.7182 | 33.7173 | 28.6115 | 30.1049 | 14.8227 | 12.2059 | 12.9453 | 32.149 | 27.2036 | 28.6617 | 32.2479 | 27.2261 | 28.7272 | 6.6093 | 0.138 | 16.8522 |
| 0.7422 | 3.0 | 49188 | 0.7083 | 34.6128 | 29.0223 | 30.7248 | 14.9888 | 12.3092 | 13.1021 | 33.2507 | 27.8154 | 29.4599 | 33.2848 | 27.812 | 29.5064 | 6.2407 | 0.1416 | 16.5806 |
| 0.705 | 4.0 | 65584 | 0.7035 | 34.156 | 29.0012 | 30.546 | 14.72 | 12.0251 | 12.8161 | 32.7527 | 27.6511 | 29.1955 | 32.7692 | 27.6627 | 29.231 | 6.1784 | 0.1393 | 16.7793 |
| 0.6859 | 5.0 | 81980 | 0.7038 | 35.1405 | 29.6033 | 31.2614 | 15.5108 | 12.6414 | 13.5059 | 33.8335 | 28.4264 | 30.0745 | 33.8782 | 28.4349 | 30.0901 | 6.5896 | 0.144 | 16.6651 |
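
The validation metrics above, like the evaluation results at the top of the card, are ROUGE, SacreBLEU, and METEOR scores over generated questions. A minimal sketch of how such scores can be computed with the metric API shipped in Datasets 2.1.0 follows; the predictions and references are placeholders, and this is not necessarily the exact evaluation code used for this model.

```python
# Sketch of computing ROUGE / SacreBLEU / METEOR with datasets.load_metric
# (illustrative placeholders; not necessarily the evaluation code used for this card).
from datasets import load_metric

predictions = ["what is the capital of bangladesh ?"]       # generated questions
references = ["what is the capital city of bangladesh ?"]   # gold questions

rouge = load_metric("rouge")
rouge_scores = rouge.compute(predictions=predictions, references=references)
# Scores are fractions in [0, 1]; the card reports them multiplied by 100.
r1 = rouge_scores["rouge1"].mid
print(r1.precision, r1.recall, r1.fmeasure)

sacrebleu = load_metric("sacrebleu")
bleu = sacrebleu.compute(predictions=predictions,
                         references=[[ref] for ref in references])
print(bleu["score"])  # 0-100 scale

meteor = load_metric("meteor")
print(meteor.compute(predictions=predictions, references=references)["meteor"])
```

Note that the default ROUGE tokenizer is oriented toward Latin-script text, so evaluating Bengali output may require a language-appropriate tokenization step.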

Framework versions

  • Transformers 4.20.1
  • Pytorch 1.11.0
  • Datasets 2.1.0
  • Tokenizers 0.12.1