# fine-tuned-BioBART-20-epochs-1024-input-256-output
This model is a fine-tuned version of [GanjinZero/biobart-base](https://huggingface.co/GanjinZero/biobart-base) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.9316
- Rouge1: 0.1523
- Rouge2: 0.0383
- Rougel: 0.1238
- Rougelsum: 0.1231
- Gen Len: 33.48
## Model description
More information needed
## Intended uses & limitations
More information needed
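No detailed usage guidance has been provided. As a rough illustration, the checkpoint can be loaded with the standard `transformers` seq2seq API; the generation settings below simply mirror the 1024-input / 256-output lengths in the model name and are assumptions, not documented settings:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the fine-tuned checkpoint from the Hub.
model_id = "tanatapanun/fine-tuned-BioBART-20-epochs-1024-input-256-output"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Placeholder biomedical text to summarize.
text = "Patient presented with chest pain and was admitted for observation."

# Truncate input to 1024 tokens and generate up to 256 tokens
# (lengths assumed from the model name, not documented here).
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
summary_ids = model.generate(**inputs, max_new_tokens=256, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```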
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
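As a point of reference, these values correspond roughly to the `Seq2SeqTrainingArguments` below; the output directory and the per-epoch evaluation cadence are assumptions for illustration, since only the values listed above are documented (the Adam betas and epsilon match the library defaults):

```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the reported hyperparameters; output_dir and evaluation_strategy
# are assumptions, not documented settings.
training_args = Seq2SeqTrainingArguments(
    output_dir="fine-tuned-BioBART-20-epochs-1024-input-256-output",
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=20,
    evaluation_strategy="epoch",
    predict_with_generate=True,
)
```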
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log        | 1.0   | 151  | 6.1537          | 0.0448 | 0.0    | 0.0437 | 0.0438    | 20.24   |
| No log        | 2.0   | 302  | 1.1404          | 0.104  | 0.0283 | 0.0891 | 0.0893    | 18.58   |
| No log        | 3.0   | 453  | 0.9725          | 0.0396 | 0.007  | 0.033  | 0.0325    | 15.0    |
| 4.0322        | 4.0   | 604  | 0.9153          | 0.1266 | 0.027  | 0.0985 | 0.0989    | 28.58   |
| 4.0322        | 5.0   | 755  | 0.8836          | 0.1575 | 0.0321 | 0.1256 | 0.1258    | 31.7    |
| 4.0322        | 6.0   | 906  | 0.8710          | 0.1505 | 0.0314 | 0.1184 | 0.1189    | 37.41   |
| 0.7605        | 7.0   | 1057 | 0.8578          | 0.1511 | 0.0362 | 0.1109 | 0.111     | 45.66   |
| 0.7605        | 8.0   | 1208 | 0.8546          | 0.1722 | 0.0358 | 0.1318 | 0.1315    | 34.14   |
| 0.7605        | 9.0   | 1359 | 0.8584          | 0.1493 | 0.0288 | 0.1125 | 0.1125    | 26.25   |
| 0.5251        | 10.0  | 1510 | 0.8631          | 0.1705 | 0.0407 | 0.1322 | 0.1322    | 35.71   |
| 0.5251        | 11.0  | 1661 | 0.8690          | 0.1856 | 0.0364 | 0.1498 | 0.15      | 28.69   |
| 0.5251        | 12.0  | 1812 | 0.8763          | 0.1995 | 0.0362 | 0.1555 | 0.1564    | 39.6    |
| 0.5251        | 13.0  | 1963 | 0.8928          | 0.1727 | 0.0349 | 0.1376 | 0.1378    | 30.79   |
| 0.3673        | 14.0  | 2114 | 0.8967          | 0.1578 | 0.0297 | 0.1209 | 0.1205    | 34.95   |
| 0.3673        | 15.0  | 2265 | 0.9073          | 0.1604 | 0.0363 | 0.1256 | 0.1246    | 33.75   |
| 0.3673        | 16.0  | 2416 | 0.9155          | 0.1627 | 0.035  | 0.1326 | 0.1321    | 35.75   |
| 0.2634        | 17.0  | 2567 | 0.9227          | 0.164  | 0.0406 | 0.1346 | 0.136     | 34.14   |
| 0.2634        | 18.0  | 2718 | 0.9270          | 0.1483 | 0.0365 | 0.1201 | 0.1187    | 32.11   |
| 0.2634        | 19.0  | 2869 | 0.9291          | 0.1569 | 0.0365 | 0.1249 | 0.1246    | 35.13   |
| 0.2133        | 20.0  | 3020 | 0.9316          | 0.1523 | 0.0383 | 0.1238 | 0.1231    | 33.48   |
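The Rouge1/Rouge2/Rougel/Rougelsum columns appear to be standard ROUGE scores; a minimal sketch of computing them with the `evaluate` library is shown below (the predictions and references are placeholders, and this is an assumption about how the reported values were obtained):

```python
import evaluate

# ROUGE metrics as reported in the table; inputs here are placeholders.
rouge = evaluate.load("rouge")
predictions = ["the patient was discharged in stable condition"]
references = ["patient discharged home in stable condition"]
scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # {'rouge1': ..., 'rouge2': ..., 'rougeL': ..., 'rougeLsum': ...}
```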
### Framework versions
- Transformers 4.36.2
- Pytorch 1.12.1+cu113
- Datasets 2.16.1
- Tokenizers 0.15.0