
Af-En_update

This model is a fine-tuned version of Helsinki-NLP/opus-mt-af-en for Afrikaans-to-English translation; the fine-tuning dataset is not specified. It achieves the following results on the evaluation set (a usage sketch follows the results):

  • Loss: 1.7197
  • Bleu: 55.3346
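
A minimal inference sketch using the transformers translation pipeline. The repository id below is a placeholder (the actual hub id of this checkpoint is not stated in the card), and the example sentence is illustrative only:

```python
from transformers import pipeline

# Placeholder repository id; replace with the hub id under which this checkpoint is published.
translator = pipeline("translation", model="your-username/Af-En_update")

# Translate an Afrikaans sentence into English.
result = translator("Die weer is vandag mooi.")
print(result[0]["translation_text"])
```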

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of the corresponding Seq2SeqTrainingArguments follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 15
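
For reference, here is a minimal sketch of how these hyperparameters map onto Seq2SeqTrainingArguments in transformers. The output directory and predict_with_generate are assumptions for illustration, not details taken from the original training script; the Adam betas and epsilon listed above match the Trainer defaults, so they are not set explicitly:

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: output_dir and predict_with_generate are assumed; the rest mirrors the list above.
training_args = Seq2SeqTrainingArguments(
    output_dir="Af-En_update",          # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=15,
    predict_with_generate=True,         # assumed; needed to compute BLEU during evaluation
)
```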

Training results

| Training Loss | Epoch | Step  | Validation Loss | Bleu    |
|---------------|-------|-------|-----------------|---------|
| 1.3745        | 1.0   | 2553  | 1.7537          | 51.9270 |
| 1.0462        | 2.0   | 5106  | 1.6305          | 53.9359 |
| 0.896         | 3.0   | 7659  | 1.6216          | 54.3049 |
| 0.7824        | 4.0   | 10212 | 1.6108          | 54.9902 |
| 0.6974        | 5.0   | 12765 | 1.6183          | 55.0265 |
| 0.643         | 6.0   | 15318 | 1.6207          | 55.4137 |
| 0.5635        | 7.0   | 17871 | 1.6276          | 55.1335 |
| 0.5141        | 8.0   | 20424 | 1.6498          | 55.2215 |
| 0.4681        | 9.0   | 22977 | 1.6678          | 55.2000 |
| 0.4304        | 10.0  | 25530 | 1.6797          | 55.2748 |
| 0.425         | 11.0  | 28083 | 1.7004          | 55.0478 |
| 0.398         | 12.0  | 30636 | 1.7013          | 55.3591 |
| 0.3759        | 13.0  | 33189 | 1.7082          | 55.3225 |
| 0.3681        | 14.0  | 35742 | 1.7151          | 55.1793 |
| 0.3571        | 15.0  | 38295 | 1.7197          | 55.2729 |
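
The BLEU figures above were presumably computed with a sacreBLEU-style metric, though the card does not say how they were produced. A minimal sketch using the evaluate library, with made-up example sentences:

```python
import evaluate

# sacreBLEU takes plain-text predictions and one or more references per example.
bleu = evaluate.load("sacrebleu")

predictions = ["The weather is beautiful today."]
references = [["The weather is lovely today."]]

score = bleu.compute(predictions=predictions, references=references)
print(score["score"])  # BLEU on the 0-100 scale used in the table above
```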

Framework versions

  • Transformers 4.21.0
  • Pytorch 1.12.0+cu113
  • Datasets 2.4.0
  • Tokenizers 0.12.1