# liputan6-pt-pl50
This model is a fine-tuned version of LazarusNLP/IndoNanoT5-base on the id_liputan6 canonical dataset. It achieves the following results on the evaluation set:
- Loss: 3.7533
- Rouge1: 19.8017
- Rouge2: 5.8239
- Rougel: 17.0737
- Rougelsum: 18.0279
- Gen Len: 30.789
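As an illustration of use, here is a minimal summarization sketch with this checkpoint. It assumes the model is published on the Hugging Face Hub under the id `apwic/liputan6-pt-pl50` (the repo this card belongs to); the generation settings (beam search, length limits) are illustrative, not the settings used for the reported metrics.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "apwic/liputan6-pt-pl50"  # assumed Hub id for this checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Any Indonesian news article; Liputan6 articles are the training domain.
article = "Banjir melanda sejumlah wilayah di Jakarta sejak Senin dini hari ..."

inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```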
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto `Seq2SeqTrainingArguments` follows the list):
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
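The card does not include the training script itself; the sketch below shows how the hyperparameters above would map onto `Seq2SeqTrainingArguments` in Transformers 4.40.2. The `output_dir`, evaluation strategy, and `predict_with_generate` flag are assumptions inferred from the per-epoch validation metrics reported below.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="liputan6-pt-pl50",   # assumed output path
    learning_rate=1e-3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,                  # Adam betas and epsilon as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5.0,
    evaluation_strategy="epoch",     # assumed: validation metrics are logged per epoch
    predict_with_generate=True,      # assumed: needed to compute ROUGE during eval
)
```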
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2 | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 4.7245        | 1.0   | 63   | 3.9912          | 16.8276 | 3.6927 | 14.367  | 15.3151   | 30.652  |
| 3.9104        | 2.0   | 126  | 3.8609          | 17.712  | 4.2061 | 14.9465 | 15.9818   | 35.104  |
| 3.6651        | 3.0   | 189  | 3.8036          | 18.8508 | 4.6943 | 15.8363 | 17.0134   | 30.749  |
| 3.4442        | 4.0   | 252  | 3.7533          | 19.7665 | 5.1425 | 16.7615 | 18.1456   | 28.31   |
| 3.2664        | 5.0   | 315  | 3.7381          | 19.5385 | 5.1106 | 16.7601 | 17.9271   | 29.142  |
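The ROUGE scores in this table are the trainer's self-reported values. Below is a minimal sketch of how such scores can be reproduced offline with the `evaluate` library; the predictions and references here are placeholders, not data from this run.

```python
import evaluate

rouge = evaluate.load("rouge")

predictions = ["banjir besar melanda jakarta sejak senin"]  # placeholder model summaries
references = ["jakarta dilanda banjir besar pada senin"]    # placeholder gold summaries

scores = rouge.compute(predictions=predictions, references=references)
# Scale to the 0-100 range used in the table above.
print({k: round(v * 100, 4) for k, v in scores.items()})
```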
### Framework versions
- Transformers 4.40.2
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1