---
language:
- id
license: apache-2.0
base_model: LazarusNLP/IndoNanoT5-base
tags:
- generated_from_trainer
datasets:
- id_liputan6
metrics:
- rouge
model-index:
- name: liputan6-pt-pl5
  results:
  - task:
      name: Summarization
      type: summarization
    dataset:
      name: id_liputan6 canonical
      type: id_liputan6
      config: canonical
      split: validation
      args: canonical
    metrics:
    - name: Rouge1
      type: rouge
      value: 18.3412
---

# liputan6-pt-pl5

This model is a fine-tuned version of [LazarusNLP/IndoNanoT5-base](https://huggingface.co/LazarusNLP/IndoNanoT5-base) on the id_liputan6 canonical dataset. It achieves the following results on the evaluation set:

- Loss: 3.8205
- Rouge1: 18.3412
- Rouge2: 4.7361
- Rougel: 15.5136
- Rougelsum: 16.6913
- Gen Len: 35.5

## Model description

liputan6-pt-pl5 is an encoder-decoder model for abstractive summarization of Indonesian-language news articles. It is based on [LazarusNLP/IndoNanoT5-base](https://huggingface.co/LazarusNLP/IndoNanoT5-base), a T5-style model pretrained on Indonesian text, and fine-tuned on the canonical configuration of the id_liputan6 dataset.

## Intended uses & limitations

The model is intended for summarizing Indonesian news articles similar to those published on Liputan6; an inference sketch is given at the end of this card. The reported ROUGE scores are relatively low for this benchmark, so generated summaries should be reviewed before use, and performance on text outside the news domain has not been evaluated.

## Training and evaluation data

The model was fine-tuned on the canonical configuration of id_liputan6, a summarization corpus built from articles published on the Indonesian news portal Liputan6. The results above are reported on the validation split.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `Seq2SeqTrainingArguments` sketch mirroring them is given at the end of this card):

- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2 | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 4.799         | 1.0   | 63   | 4.1142          | 13.0788 | 2.2394 | 10.8409 | 11.8062   | 40.873  |
| 4.179         | 2.0   | 126  | 3.9928          | 16.7604 | 3.2541 | 13.8889 | 15.1654   | 32.962  |
| 3.9656        | 3.0   | 189  | 3.8832          | 18.1366 | 3.9918 | 15.2392 | 16.4266   | 30.549  |
| 3.8038        | 4.0   | 252  | 3.8552          | 18.2504 | 4.0948 | 15.4777 | 16.7374   | 28.411  |
| 3.6617        | 5.0   | 315  | 3.8205          | 18.6328 | 4.2703 | 15.6625 | 16.9103   | 30.177  |

### Framework versions

- Transformers 4.40.2
- PyTorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
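
## Example usage

A minimal inference sketch with Transformers. The Hub repo ID below is an assumption (this card does not state where the checkpoint is published), so substitute the actual model path.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Hypothetical Hub path; replace with the repo this checkpoint is actually published under.
model_id = "LazarusNLP/liputan6-pt-pl5"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Placeholder Indonesian news text ("The government announced a new policy today").
article = "Liputan6.com, Jakarta: Pemerintah mengumumkan kebijakan baru hari ini."

inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```

Beam search (`num_beams=4`) and the length caps are illustrative generation settings, not the ones used for the evaluation above.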
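
## Training arguments sketch

A sketch of `Seq2SeqTrainingArguments` mirroring the hyperparameters listed above. The output directory, evaluation strategy, and `predict_with_generate` flag are inferred rather than taken from the original training script.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="liputan6-pt-pl5",    # assumed output path
    learning_rate=1e-3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5.0,
    evaluation_strategy="epoch",     # assumed; the results table reports per-epoch eval
    predict_with_generate=True,      # assumed; needed to compute ROUGE during evaluation
)
```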
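
## Evaluation sketch

The ROUGE numbers in this card can be reproduced along these lines with the `evaluate` library. The predictions and references below are placeholders, and the original run may have applied post-processing (such as sentence splitting for Rougelsum) that is not shown here.

```python
import evaluate

rouge = evaluate.load("rouge")

# Placeholder texts; in practice these are the model's generated summaries and
# the gold summaries from the id_liputan6 validation split.
predictions = ["pemerintah umumkan kebijakan baru"]
references = ["pemerintah mengumumkan kebijakan baru pada hari ini"]

scores = rouge.compute(predictions=predictions, references=references)
# The card reports scores on a 0-100 scale, so multiply by 100 to compare.
print({key: round(value * 100, 4) for key, value in scores.items()})
```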