# indosum-base-0

This model is a fine-tuned version of [LazarusNLP/IndoNanoT5-base](https://huggingface.co/LazarusNLP/IndoNanoT5-base), most likely on the IndoSum Indonesian news summarization dataset (inferred from the model name; the exact dataset is not documented). It achieves the following results on the evaluation set:

- Loss: 0.7170
- ROUGE-1: 72.8591
- ROUGE-2: 65.7981
- ROUGE-L: 69.8759
- ROUGE-Lsum: 72.0557
- Generated length (Gen Len): 99.276 tokens on average

## Model description

This checkpoint fine-tunes LazarusNLP/IndoNanoT5-base, a T5-style encoder-decoder model pretrained on Indonesian text, for abstractive summarization. The model has roughly 248M parameters, stored as F32 safetensors.

## Intended uses & limitations

The model is intended for summarizing Indonesian-language text. As with any abstractive summarizer, outputs can contain factual errors or omissions and should be reviewed before downstream use; performance outside the likely fine-tuning domain (Indonesian news) has not been evaluated. A minimal usage sketch follows.
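A minimal inference sketch, assuming this checkpoint is published on the Hub as `apwic/indosum-base-0` (the repo id from this card) and loads as a standard seq2seq model; the input text and generation settings are illustrative, not taken from the card:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "apwic/indosum-base-0"  # repo id taken from this card's title
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative Indonesian input; any article text works here.
article = "Pemerintah mengumumkan kebijakan baru untuk meningkatkan ..."

inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
# Eval-set summaries average ~99 tokens, so 128 new tokens is a reasonable cap.
summary_ids = model.generate(**inputs, max_new_tokens=128, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```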

## Training and evaluation data

Not documented on the card. The model name points to the IndoSum Indonesian news summarization corpus, but the exact dataset version, splits, and preprocessing are unspecified.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch in code follows the list):

- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
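A sketch of how these settings map onto `Seq2SeqTrainingArguments`, assuming the standard Hugging Face `Seq2SeqTrainer` was used; `output_dir`, the evaluation cadence, and `predict_with_generate` are assumptions, and the Adam betas/epsilon listed above are simply the Trainer defaults:

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical reconstruction of the card's hyperparameters; output_dir and
# the evaluation/generation flags are assumptions, not documented settings.
training_args = Seq2SeqTrainingArguments(
    output_dir="indosum-base-0",
    learning_rate=1e-3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    seed=42,
    num_train_epochs=5.0,
    lr_scheduler_type="linear",
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the default optimizer,
    # spelled out here for clarity.
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",  # the results table reports metrics once per epoch
    predict_with_generate=True,   # needed so ROUGE and Gen Len can be computed
)
```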

### Training results

| Training Loss | Epoch | Step | Validation Loss | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-Lsum | Gen Len  |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:----------:|:--------:|
| 1.2132        | 1.0   | 892  | 0.7742          | 67.4414 | 59.7409 | 64.517  | 66.4918    | 94.092   |
| 0.686         | 2.0   | 1784 | 0.6673          | 70.2138 | 62.8202 | 67.1553 | 69.3063    | 100.2933 |
| 0.491         | 3.0   | 2676 | 0.6274          | 71.2142 | 63.9943 | 68.2722 | 70.2971    | 100.944  |
| 0.343         | 4.0   | 3568 | 0.6469          | 71.7114 | 64.489  | 68.7214 | 70.7949    | 98.8227  |
| 0.2059        | 5.0   | 4460 | 0.7170          | 72.5364 | 65.2519 | 69.5637 | 71.6884    | 98.9053  |
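For reference, a minimal sketch of how ROUGE scores like those above are typically computed with the `evaluate` library; the prediction and reference strings are placeholders, not data from this card:

```python
import evaluate

rouge = evaluate.load("rouge")  # requires the rouge_score package

# Placeholder decoded model outputs and gold summaries.
predictions = ["pemerintah mengumumkan kebijakan baru"]
references = ["pemerintah resmi mengumumkan kebijakan baru"]

scores = rouge.compute(predictions=predictions, references=references)
# Keys: rouge1, rouge2, rougeL, rougeLsum (F1 in 0..1; the card reports them x100).
print({k: round(v * 100, 4) for k, v in scores.items()})
```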

### Framework versions

- Transformers 4.40.2
- PyTorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
