opus-mt-en-id-opus100

This model was trained from scratch on the English–Indonesian (en-id) portion of the opus100 dataset. It achieves the following results on the evaluation set:

  • Loss: 2.3682
  • Bleu: 27.5354

Model description

More information needed

Intended uses & limitations

More information needed
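
In the meantime, here is a minimal translation sketch, assuming the checkpoint is published on the Hub as yonathanstwn/opus-mt-en-id-opus100 and uses the MarianMT architecture of the opus-mt family (the example sentence is illustrative only):

```python
from transformers import MarianMTModel, MarianTokenizer

# Assumed Hub repository id for this checkpoint.
model_name = "yonathanstwn/opus-mt-en-id-opus100"

tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate an English sentence into Indonesian.
batch = tokenizer(["How are you today?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```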

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after the list):

  • learning_rate: 0.0001
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 4000
  • num_epochs: 25
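
A sketch of how these settings might be expressed with the Transformers Seq2SeqTrainingArguments API; the actual training script is not part of this card, and the output directory plus the evaluation/generation flags are assumptions:

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical mapping of the reported hyperparameters onto the Trainer API.
# The Adam betas (0.9, 0.999) and epsilon (1e-08) match the library defaults,
# so no explicit optimizer arguments are needed here.
training_args = Seq2SeqTrainingArguments(
    output_dir="opus-mt-en-id-opus100",  # assumption: any output path works
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=4000,
    num_train_epochs=25,
    evaluation_strategy="epoch",   # assumption: matches the per-epoch results table
    predict_with_generate=True,    # assumption: required to compute BLEU during evaluation
)
```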

Training results

| Training Loss | Epoch | Step   | Validation Loss | Bleu    |
|---------------|-------|--------|-----------------|---------|
| 1.6086        | 1.0   | 31250  | 1.7099          | 29.4293 |
| 1.5762        | 2.0   | 62500  | 1.7410          | 28.948  |
| 1.5027        | 3.0   | 93750  | 1.7678          | 28.6931 |
| 1.4377        | 4.0   | 125000 | 1.7798          | 28.9463 |
| 1.3763        | 5.0   | 156250 | 1.8019          | 28.4966 |
| 1.3198        | 6.0   | 187500 | 1.8202          | 29.6279 |
| 1.2648        | 7.0   | 218750 | 1.8312          | 29.8151 |
| 1.2115        | 8.0   | 250000 | 1.8490          | 29.3032 |
| 1.1584        | 9.0   | 281250 | 1.8729          | 28.7282 |
| 1.1067        | 10.0  | 312500 | 1.8971          | 29.4797 |
| 1.0555        | 11.0  | 343750 | 1.9405          | 29.3416 |
| 1.0052        | 12.0  | 375000 | 1.9554          | 29.0168 |
| 0.956         | 13.0  | 406250 | 2.0001          | 28.2454 |
| 0.9069        | 14.0  | 437500 | 2.0282          | 28.6705 |
| 0.8589        | 15.0  | 468750 | 2.0591          | 28.1988 |
| 0.8115        | 16.0  | 500000 | 2.0944          | 28.2227 |
| 0.765         | 17.0  | 531250 | 2.1294          | 28.4351 |
| 0.7203        | 18.0  | 562500 | 2.1680          | 27.9764 |
| 0.6769        | 19.0  | 593750 | 2.2013          | 28.2986 |
| 0.6349        | 20.0  | 625000 | 2.2339          | 27.165  |
| 0.5957        | 21.0  | 656250 | 2.2795          | 27.5845 |
| 0.5589        | 22.0  | 687500 | 2.3037          | 27.7201 |
| 0.5246        | 23.0  | 718750 | 2.3311          | 27.3305 |
| 0.4944        | 24.0  | 750000 | 2.3487          | 27.3965 |
| 0.469         | 25.0  | 781250 | 2.3682          | 27.5354 |
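
The card does not state which BLEU implementation produced the scores above. A minimal sketch using the sacrebleu metric from the evaluate library (an assumption), with illustrative prediction and reference strings:

```python
import evaluate

# Assumption: BLEU computed with sacrebleu via the evaluate library.
bleu = evaluate.load("sacrebleu")

predictions = ["Apa kabar hari ini?"]               # illustrative model outputs
references = [["Bagaimana kabar Anda hari ini?"]]   # one or more references per prediction

result = bleu.compute(predictions=predictions, references=references)
print(round(result["score"], 4))  # corpus-level BLEU on a 0-100 scale
```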

Framework versions

  • Transformers 4.26.1
  • Pytorch 2.0.0
  • Datasets 2.10.1
  • Tokenizers 0.11.0