bart-base-cnn-YT-transcript-sum

This model is a fine-tuned version of ainize/bart-base-cnn on an unspecified dataset; the model name suggests summarization of YouTube transcripts. It achieves the following results on the evaluation set (the epoch-3 checkpoint, which has the lowest validation loss in the training results below):

  • Loss: 1.4969
  • Rouge1: 27.1516
  • Rouge2: 14.6227
  • RougeL: 23.3968
  • RougeLsum: 25.4786
  • Gen Len: 19.9954

Model description

More information needed

Intended uses & limitations

More information needed
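Pending details from the author, the checkpoint can be exercised with the standard Transformers summarization pipeline. The snippet below is a minimal sketch: the sample transcript and the length/truncation settings are illustrative, not values from this card.

```python
from transformers import pipeline

# Load this checkpoint through the standard summarization pipeline.
summarizer = pipeline(
    "summarization",
    model="anuragrawal/bart-base-cnn-YT-transcript-sum",
)

transcript = (
    "Hi everyone, welcome back to the channel. Today we're going to walk "
    "through how attention works in transformer models, step by step..."
)

# truncation=True guards against transcripts longer than BART's
# 1024-token input window; the length limits here are illustrative.
summary = summarizer(transcript, max_length=64, min_length=10, truncation=True)
print(summary[0]["summary_text"])
```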

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training; a sketch mapping them onto Seq2SeqTrainingArguments follows the list:

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 15
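For reference, the values above map onto Transformers 4.33 `Seq2SeqTrainingArguments` roughly as follows. This is a sketch, not the author's script: `output_dir` is a placeholder, and the two flags marked as assumptions are inferred from the per-epoch results below rather than stated in the card.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="bart-base-cnn-YT-transcript-sum",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    num_train_epochs=15,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",   # assumption: the results table shows one eval per epoch
    predict_with_generate=True,    # assumption: needed to compute ROUGE during eval
)
```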

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | RougeL  | RougeLsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log        | 1.0   | 216  | 1.5374          | 24.7307 | 11.5124 | 20.6823 | 22.9189   | 19.9630 |
| No log        | 2.0   | 432  | 1.4976          | 26.8250 | 14.0512 | 23.2078 | 25.2044   | 19.9583 |
| 1.5449        | 3.0   | 648  | 1.4969          | 27.1516 | 14.6227 | 23.3968 | 25.4786   | 19.9954 |
| 1.5449        | 4.0   | 864  | 1.5345          | 27.2526 | 15.0873 | 23.8556 | 25.7798   | 19.9861 |
| 0.9000        | 5.0   | 1080 | 1.5962          | 26.8267 | 14.7267 | 23.2263 | 25.2149   | 19.9676 |
| 0.9000        | 6.0   | 1296 | 1.6378          | 26.8444 | 14.8753 | 23.2540 | 25.2943   | 19.9815 |
| 0.5749        | 7.0   | 1512 | 1.6819          | 27.1776 | 14.8980 | 23.2454 | 25.4298   | 19.9583 |
| 0.5749        | 8.0   | 1728 | 1.7360          | 26.9518 | 15.3080 | 23.6574 | 25.2991   | 19.9769 |
| 0.5749        | 9.0   | 1944 | 1.7796          | 27.9253 | 15.7998 | 24.4827 | 26.4424   | 19.9769 |
| 0.3668        | 10.0  | 2160 | 1.8078          | 26.9211 | 15.0903 | 23.4484 | 25.4369   | 19.9815 |
| 0.3668        | 11.0  | 2376 | 1.8405          | 27.4434 | 15.3608 | 23.9030 | 25.8117   | 19.9861 |
| 0.2550        | 12.0  | 2592 | 1.8447          | 27.7175 | 15.7173 | 24.2096 | 26.0946   | 19.9815 |
| 0.2550        | 13.0  | 2808 | 1.8834          | 27.2409 | 15.3865 | 23.7314 | 25.7682   | 19.9815 |
| 0.1920        | 14.0  | 3024 | 1.8796          | 27.2939 | 15.5502 | 23.8294 | 25.7409   | 19.9815 |
| 0.1920        | 15.0  | 3240 | 1.8851          | 27.6741 | 15.7710 | 24.1976 | 26.1196   | 19.9722 |
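Validation loss bottoms out at epoch 3 (1.4969) and climbs thereafter while training loss keeps falling, a typical sign of overfitting. Per-epoch ROUGE and Gen Len columns like these are usually produced by a `compute_metrics` hook passed to `Seq2SeqTrainer`. The sketch below uses the `evaluate` library; that choice, and reusing the base model's tokenizer, are assumptions, as the card does not say how the metrics were computed.

```python
import evaluate
import numpy as np
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ainize/bart-base-cnn")  # assumption: base-model tokenizer
rouge = evaluate.load("rouge")

def compute_metrics(eval_pred):
    """Score generated summaries against reference summaries with ROUGE."""
    predictions, labels = eval_pred
    decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True)
    # Labels are padded with -100; swap in the real pad id before decoding.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    result = rouge.compute(
        predictions=decoded_preds, references=decoded_labels, use_stemmer=True
    )
    result = {k: v * 100 for k, v in result.items()}  # report scores as percentages
    # "Gen Len" is the mean generated length in tokens, padding excluded.
    result["gen_len"] = float(
        np.mean([np.count_nonzero(p != tokenizer.pad_token_id) for p in predictions])
    )
    return {k: round(v, 4) for k, v in result.items()}
```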

Framework versions

  • Transformers 4.33.2
  • Pytorch 2.0.1+cu117
  • Datasets 2.14.5
  • Tokenizers 0.13.3