
fine-tuned-bart-2084-30-epochs

This model is a fine-tuned version of facebook/bart-base on an unspecified dataset. It achieves the following results on the evaluation set (a brief usage sketch follows the metrics):

  • Loss: 0.8935
  • ROUGE-1: 0.3436
  • ROUGE-2: 0.1382
  • ROUGE-L: 0.3044
  • ROUGE-Lsum: 0.3016
  • Gen Len: 15.33
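Since the intended-uses section below is still a placeholder, here is a minimal, non-authoritative usage sketch. It assumes the checkpoint is available under the repo id tanatapanun/fine-tuned-bart-2048-30-epochs (the id shown for this card; adjust it if your copy uses a different name) and that the model is used for short abstractive summarization, which the average generation length of roughly 15 tokens suggests.

```python
# Minimal usage sketch (assumption: the checkpoint is available as
# "tanatapanun/fine-tuned-bart-2048-30-epochs"; adjust the repo id if needed).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "tanatapanun/fine-tuned-bart-2048-30-epochs"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "Replace this with the text you want to summarize."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

# Evaluation runs averaged ~15 generated tokens, so a small generation budget is enough.
summary_ids = model.generate(**inputs, max_new_tokens=32, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```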

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a rough code equivalent is sketched after the list):

  • learning_rate: 2e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 30
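As an illustration only, the listed values map roughly onto the Seq2SeqTrainingArguments below. The output_dir, evaluation_strategy, and predict_with_generate settings are assumptions (the per-epoch metrics and Gen Len column in the results table suggest them); the actual training script is not documented on this card.

```python
from transformers import Seq2SeqTrainingArguments

# Rough sketch of the hyperparameters listed above; output_dir and the
# evaluation-related options are assumptions, not documented settings.
training_args = Seq2SeqTrainingArguments(
    output_dir="fine-tuned-bart-2084-30-epochs",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    evaluation_strategy="epoch",   # assumed from the per-epoch results table
    predict_with_generate=True,    # assumed, needed to report ROUGE / Gen Len
)
```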

Training results

| Training Loss | Epoch | Step | Validation Loss | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-Lsum | Gen Len |
|---------------|-------|------|-----------------|---------|---------|---------|------------|---------|
| No log        | 1.0   | 301  | 0.7890          | 0.2453  | 0.0875  | 0.2145  | 0.2161     | 14.83   |
| 1.8096        | 2.0   | 602  | 0.7325          | 0.2259  | 0.0793  | 0.1953  | 0.1953     | 13.6    |
| 1.8096        | 3.0   | 903  | 0.7239          | 0.2872  | 0.0985  | 0.2567  | 0.2559     | 14.63   |
| 0.6874        | 4.0   | 1204 | 0.7236          | 0.3302  | 0.1232  | 0.2922  | 0.2916     | 13.89   |
| 0.5882        | 5.0   | 1505 | 0.7257          | 0.3129  | 0.1167  | 0.2778  | 0.2775     | 14.95   |
| 0.5882        | 6.0   | 1806 | 0.7218          | 0.3269  | 0.1251  | 0.2957  | 0.2946     | 15.14   |
| 0.4982        | 7.0   | 2107 | 0.7403          | 0.3029  | 0.1114  | 0.2708  | 0.27       | 14.94   |
| 0.4982        | 8.0   | 2408 | 0.7417          | 0.3113  | 0.12    | 0.2762  | 0.2756     | 14.17   |
| 0.4299        | 9.0   | 2709 | 0.7470          | 0.3164  | 0.1274  | 0.2853  | 0.283      | 14.42   |
| 0.3815        | 10.0  | 3010 | 0.7505          | 0.3294  | 0.134   | 0.2919  | 0.2902     | 15.29   |
| 0.3815        | 11.0  | 3311 | 0.7725          | 0.3288  | 0.1285  | 0.2904  | 0.2908     | 15.26   |
| 0.3421        | 12.0  | 3612 | 0.7864          | 0.3383  | 0.1298  | 0.3055  | 0.3043     | 15.0    |
| 0.3421        | 13.0  | 3913 | 0.7975          | 0.3225  | 0.1219  | 0.2864  | 0.2845     | 15.15   |
| 0.2989        | 14.0  | 4214 | 0.8120          | 0.3326  | 0.1344  | 0.2918  | 0.2907     | 15.17   |
| 0.2652        | 15.0  | 4515 | 0.8128          | 0.3226  | 0.1154  | 0.2942  | 0.2934     | 15.0    |
| 0.2652        | 16.0  | 4816 | 0.8265          | 0.3201  | 0.1154  | 0.2845  | 0.2833     | 15.29   |
| 0.2382        | 17.0  | 5117 | 0.8325          | 0.3251  | 0.1265  | 0.2929  | 0.2905     | 15.37   |
| 0.2382        | 18.0  | 5418 | 0.8375          | 0.3348  | 0.1218  | 0.3013  | 0.299      | 15.14   |
| 0.2149        | 19.0  | 5719 | 0.8543          | 0.3373  | 0.1278  | 0.2991  | 0.2969     | 15.19   |
| 0.1956        | 20.0  | 6020 | 0.8638          | 0.3386  | 0.139   | 0.304   | 0.302      | 15.24   |
| 0.1956        | 21.0  | 6321 | 0.8659          | 0.3244  | 0.1253  | 0.2868  | 0.2857     | 15.23   |
| 0.1821        | 22.0  | 6622 | 0.8754          | 0.3325  | 0.1258  | 0.2967  | 0.2956     | 15.46   |
| 0.1821        | 23.0  | 6923 | 0.8775          | 0.3389  | 0.1288  | 0.3075  | 0.3062     | 15.22   |
| 0.164         | 24.0  | 7224 | 0.8779          | 0.3488  | 0.1331  | 0.3116  | 0.3105     | 15.56   |
| 0.159         | 25.0  | 7525 | 0.8839          | 0.3455  | 0.1409  | 0.3065  | 0.3039     | 15.44   |
| 0.159         | 26.0  | 7826 | 0.8885          | 0.3399  | 0.1353  | 0.3049  | 0.3022     | 15.37   |
| 0.1458        | 27.0  | 8127 | 0.8869          | 0.331   | 0.1309  | 0.2922  | 0.2901     | 15.36   |
| 0.1458        | 28.0  | 8428 | 0.8918          | 0.3388  | 0.138   | 0.3062  | 0.3031     | 15.34   |
| 0.1442        | 29.0  | 8729 | 0.8925          | 0.3384  | 0.138   | 0.3033  | 0.3006     | 15.23   |
| 0.1369        | 30.0  | 9030 | 0.8935          | 0.3436  | 0.1382  | 0.3044  | 0.3016     | 15.33   |
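The ROUGE columns appear to be the standard rouge1/rouge2/rougeL/rougeLsum F-measures; the evaluation dataset and exact metric configuration are not documented on this card. As a hedged sketch, scores of this form can be computed with the evaluate library:

```python
# Hedged sketch: computing ROUGE scores of the same form as the table above.
# The evaluation dataset and exact metric settings used for this card are not documented.
import evaluate

rouge = evaluate.load("rouge")
predictions = ["the model's generated summary"]               # placeholder
references = ["the reference summary it is scored against"]   # placeholder

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # keys: rouge1, rouge2, rougeL, rougeLsum (F-measures in [0, 1])
```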

Framework versions

  • Transformers 4.36.2
  • PyTorch 1.12.1+cu113
  • Datasets 2.15.0
  • Tokenizers 0.15.0