bart_samsum_finetuned_for_asr

This model is a fine-tuned version of facebook/bart-large-xsum. The training dataset is recorded as "None" in the card metadata; the model name suggests the SAMSum dialogue-summarization dataset. It achieves the following results on the evaluation set:

  • Loss: 1.4623
  • Rouge1: 54.4349
  • Rouge2: 29.4619
  • RougeL: 44.7701
  • RougeLsum: 50.2825
  • Gen Len: 30.2751
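
Since the card does not yet document usage, here is a minimal inference sketch. It assumes the checkpoint is published on the Hub under the repo id shown for this card (404sau404/bart_samsum_finetuned_for_asr) and uses the standard transformers summarization pipeline; the sample dialogue is an illustrative stand-in:

```python
# Minimal sketch, assuming the checkpoint is available on the Hub as
# 404sau404/bart_samsum_finetuned_for_asr (the repo id shown for this card).
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="404sau404/bart_samsum_finetuned_for_asr",
)

# Illustrative SAMSum-style dialogue, not taken from the evaluation set.
dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)
print(summarizer(dialogue, max_length=64, min_length=5)[0]["summary_text"])
```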

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 8
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5
  • mixed_precision_training: Native AMP
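
As a rough guide, these settings map onto the following Seq2SeqTrainingArguments. This is a sketch, not the author's actual training script; the output directory and any option not listed above are assumptions (the Adam betas and epsilon match the library defaults):

```python
# Sketch of Seq2SeqTrainingArguments matching the hyperparameters above.
# output_dir is a placeholder; unlisted options keep library defaults.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="bart_samsum_finetuned_for_asr",  # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=4,   # train_batch_size: 4
    per_device_eval_batch_size=4,    # eval_batch_size: 4
    seed=42,
    gradient_accumulation_steps=2,   # total_train_batch_size: 8
    lr_scheduler_type="linear",
    num_train_epochs=5,
    fp16=True,                       # mixed_precision_training: Native AMP
)
```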

Training results

| Training Loss | Epoch  | Step | Validation Loss | Rouge1  | Rouge2  | RougeL  | RougeLsum | Gen Len |
|---------------|--------|------|-----------------|---------|---------|---------|-----------|---------|
| 1.3855        | 0.9997 | 1841 | 1.5274          | 52.3133 | 27.7547 | 43.0233 | 48.3891   | 30.3004 |
| 1.0882        | 2.0    | 3683 | 1.4969          | 53.2457 | 28.4129 | 44.1196 | 49.1579   | 30.2637 |
| 0.8376        | 2.9997 | 5524 | 1.5882          | 52.6929 | 27.5974 | 43.3339 | 47.9779   | 30.8034 |
| 0.6756        | 4.0    | 7366 | 1.6617          | 52.5167 | 27.1278 | 43.037  | 48.1382   | 30.5299 |
| 0.5417        | 4.9986 | 9205 | 1.8083          | 52.0696 | 26.8054 | 42.6108 | 47.5455   | 30.2894 |
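
The ROUGE scores above can be computed with the evaluate library. A minimal sketch follows; the predictions and references here are illustrative stand-ins, not the actual evaluation split:

```python
# Sketch of the ROUGE computation; inputs are illustrative stand-ins.
import evaluate

rouge = evaluate.load("rouge")
predictions = ["Amanda baked cookies and will bring Jerry some tomorrow."]
references = ["Amanda baked cookies and will bring some to Jerry tomorrow."]
print(rouge.compute(predictions=predictions, references=references))
# -> dict with rouge1, rouge2, rougeL, and rougeLsum scores
```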

Framework versions

  • Transformers 4.42.4
  • PyTorch 2.4.0+cu121
  • Datasets 2.21.0
  • Tokenizers 0.19.1