---
base_model: bart-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: fine-tuned-bart-20-epochs-1024-input-128-output
results: []
---
# fine-tuned-bart-20-epochs-1024-input-128-output
This model is a fine-tuned version of [bart-base](https://huggingface.co/bart-base) on an unspecified dataset (the training data is not documented in this card).
It achieves the following results on the evaluation set:
- Loss: 1.6312
- Rouge1: 0.1575
- Rouge2: 0.0297
- Rougel: 0.1269
- Rougelsum: 0.1263
- Gen Len: 32.75
## Model description
Per the model name, this appears to be a BART-base encoder-decoder model fine-tuned for 20 epochs, with inputs truncated to 1024 tokens and generated outputs capped at 128 tokens; evaluation with ROUGE suggests a summarization-style task. No further details are documented.
## Intended uses & limitations
More information needed
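Since no usage guidance is provided, here is a minimal inference sketch. The Hugging Face Hub repository id below is an assumption inferred from the model name and author namespace, as is the summarization-style usage; adjust both as needed.

```python
# Minimal inference sketch; the repository id and task are assumptions.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "tanatapanun/fine-tuned-bart-20-epochs-1024-input-128-output"  # assumed hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "Replace this with the document to summarize."
# The model name suggests inputs were truncated to 1024 tokens and
# outputs capped at 128 tokens during training.
inputs = tokenizer(text, max_length=1024, truncation=True, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```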
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
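For reference, these settings map onto `Seq2SeqTrainingArguments` roughly as sketched below. This is a reconstruction from the list above, not the author's actual script; the output directory and the per-epoch evaluation strategy are assumptions.

```python
# Hedged reconstruction of the training configuration listed above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="fine-tuned-bart-20-epochs-1024-input-128-output",  # assumed
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the Trainer's
    # default AdamW configuration, so no explicit optimizer argument is needed.
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=20,
    evaluation_strategy="epoch",  # assumed: the results table reports one eval per epoch
    predict_with_generate=True,   # assumed: required to compute ROUGE on generated text
)
```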
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log        | 1.0   | 151  | 6.3450          | 0.0044 | 0.0000 | 0.0045 | 0.0045    | 8.06    |
| No log        | 2.0   | 302  | 1.9546          | 0.1167 | 0.0307 | 0.1002 | 0.1003    | 25.59   |
| No log        | 3.0   | 453  | 1.6769          | 0.0789 | 0.0193 | 0.0650 | 0.0642    | 14.44   |
| 4.4533        | 4.0   | 604  | 1.5784          | 0.1300 | 0.0304 | 0.0970 | 0.0976    | 33.11   |
| 4.4533        | 5.0   | 755  | 1.5294          | 0.1659 | 0.0337 | 0.1289 | 0.1290    | 44.89   |
| 4.4533        | 6.0   | 906  | 1.5051          | 0.1459 | 0.0332 | 0.1048 | 0.1041    | 47.13   |
| 1.1908        | 7.0   | 1057 | 1.4893          | 0.1495 | 0.0376 | 0.1111 | 0.1101    | 45.77   |
| 1.1908        | 8.0   | 1208 | 1.4917          | 0.1350 | 0.0317 | 0.1049 | 0.1046    | 28.13   |
| 1.1908        | 9.0   | 1359 | 1.5029          | 0.1498 | 0.0293 | 0.1231 | 0.1218    | 31.36   |
| 0.7941        | 10.0  | 1510 | 1.5114          | 0.1750 | 0.0401 | 0.1327 | 0.1314    | 37.84   |
| 0.7941        | 11.0  | 1661 | 1.5400          | 0.1513 | 0.0358 | 0.1242 | 0.1231    | 29.32   |
| 0.7941        | 12.0  | 1812 | 1.5343          | 0.1579 | 0.0333 | 0.1207 | 0.1185    | 34.84   |
| 0.7941        | 13.0  | 1963 | 1.5620          | 0.1534 | 0.0347 | 0.1245 | 0.1240    | 30.83   |
| 0.5288        | 14.0  | 2114 | 1.5621          | 0.1441 | 0.0277 | 0.1138 | 0.1134    | 31.10   |
| 0.5288        | 15.0  | 2265 | 1.5808          | 0.1520 | 0.0259 | 0.1212 | 0.1208    | 34.51   |
| 0.5288        | 16.0  | 2416 | 1.6036          | 0.1657 | 0.0336 | 0.1349 | 0.1346    | 35.18   |
| 0.3635        | 17.0  | 2567 | 1.6136          | 0.1523 | 0.0307 | 0.1260 | 0.1254    | 30.67   |
| 0.3635        | 18.0  | 2718 | 1.6192          | 0.1525 | 0.0308 | 0.1227 | 0.1227    | 33.54   |
| 0.3635        | 19.0  | 2869 | 1.6324          | 0.1478 | 0.0303 | 0.1193 | 0.1189    | 32.47   |
| 0.2801        | 20.0  | 3020 | 1.6312          | 0.1575 | 0.0297 | 0.1269 | 0.1263    | 32.75   |
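The ROUGE and Gen Len columns above are typically produced by a `compute_metrics` callback passed to the Trainer. The author's exact preprocessing is not documented, so the sketch below, using the `evaluate` library and an assumed base tokenizer, is illustrative only:

```python
# Hedged sketch of a compute_metrics callback that would produce the
# ROUGE and Gen Len columns above; details are assumptions.
import evaluate
import numpy as np
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")  # assumed base tokenizer
rouge = evaluate.load("rouge")

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True)
    # The Trainer pads labels with -100; swap that for the pad token before decoding.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    result = rouge.compute(predictions=decoded_preds, references=decoded_labels)
    # "Gen Len": mean number of non-pad tokens in the generated sequences.
    result["gen_len"] = float(
        np.mean([np.count_nonzero(p != tokenizer.pad_token_id) for p in predictions])
    )
    return result
```

Such a callback would be passed to a `Seq2SeqTrainer` as `compute_metrics=compute_metrics`, alongside the training arguments sketched earlier.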
### Framework versions
- Transformers 4.36.2
- Pytorch 1.12.1+cu113
- Datasets 2.16.1
- Tokenizers 0.15.1