
indobart-combined-jv-id

This model is a fine-tuned version of indobenchmark/indobart-v2; the training dataset is not specified in this card (the "jv-id" suffix in the model name suggests Javanese-to-Indonesian translation). It achieves the following results on the evaluation set:

  • Loss: 0.6285
  • BLEU: 16.9732
  • Gen Len: 19.4429
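For inference, the checkpoint can in principle be loaded with the standard transformers Auto classes. The sketch below assumes the Hub id Mikask/indobart-indonlg-jv-id (the repository name shown for this card) and uses a made-up Javanese example sentence; note that upstream IndoBART documents a custom IndoNLGTokenizer from the indobenchmark-toolkit package, which may be required for faithful tokenization.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Assumptions: this Hub id is correct and the checkpoint loads with the Auto
# classes; upstream IndoBART also documents a custom IndoNLGTokenizer
# (indobenchmark-toolkit), which may be needed for correct tokenization.
model_id = "Mikask/indobart-indonlg-jv-id"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Made-up Javanese input; the model should produce an Indonesian translation.
text = "Aku arep mangan sega goreng."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=20, num_beams=5)  # beam count is an assumption
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```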

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 64
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
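These settings map directly onto Hugging Face Seq2SeqTrainingArguments. A minimal sketch follows; output_dir, evaluation_strategy, and predict_with_generate are assumptions not stated in this card (the per-epoch rows in the results table below suggest evaluation ran once per epoch):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="indobart-combined-jv-id",  # placeholder output directory
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=16,
    seed=42,
    # The Adam settings listed above (betas=(0.9, 0.999), epsilon=1e-8)
    # match the Trainer's AdamW defaults, so no optimizer override is needed.
    lr_scheduler_type="linear",
    num_train_epochs=10,
    evaluation_strategy="epoch",   # assumption: eval once per epoch
    predict_with_generate=True,    # assumption: required for BLEU/Gen Len at eval time
)
```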

Training results

| Training Loss | Epoch | Step | Validation Loss | BLEU    | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log        | 1.0   | 94   | 0.8084          | 11.1865 | 19.5232 |
| No log        | 2.0   | 188  | 0.7259          | 13.1223 | 19.4780 |
| No log        | 3.0   | 282  | 0.6905          | 14.6859 | 19.4366 |
| No log        | 4.0   | 376  | 0.6682          | 15.2578 | 19.4216 |
| No log        | 5.0   | 470  | 0.6536          | 15.9459 | 19.4366 |
| 0.8734        | 6.0   | 564  | 0.6439          | 16.3189 | 19.4115 |
| 0.8734        | 7.0   | 658  | 0.6367          | 16.7353 | 19.4228 |
| 0.8734        | 8.0   | 752  | 0.6327          | 16.9575 | 19.4529 |
| 0.8734        | 9.0   | 846  | 0.6300          | 17.0930 | 19.4442 |
| 0.8734        | 10.0  | 940  | 0.6285          | 16.9732 | 19.4429 |
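Metrics like the BLEU and Gen Len columns above are typically produced by a compute_metrics callback passed to Seq2SeqTrainer. The sketch below mirrors the pattern used in the Transformers translation examples; it is an assumption that this run used the same logic, and the tokenizer argument stands for whatever tokenizer the model was trained with.

```python
import numpy as np
import evaluate  # assumption: the `evaluate` package is installed (not listed under Framework versions)

bleu = evaluate.load("sacrebleu")

def compute_metrics(eval_preds, tokenizer):
    """Return BLEU and mean generated length, matching the columns reported above."""
    preds, labels = eval_preds
    # Labels use -100 for padding; restore pad_token_id so they can be decoded.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    result = bleu.compute(predictions=decoded_preds,
                          references=[[label] for label in decoded_labels])
    # Gen Len: average number of non-padding tokens in the generated sequences.
    gen_len = np.mean([np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds])
    return {"bleu": result["score"], "gen_len": gen_len}
```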

Framework versions

  • Transformers 4.33.1
  • Pytorch 2.0.1+cu118
  • Datasets 2.14.5
  • Tokenizers 0.13.3