opus-mt-ar-en-finetuned_augmented_synthetic_1-ar-to-en

This model is a fine-tuned version of Helsinki-NLP/opus-mt-ar-en on an unknown dataset. It achieves the following results on the evaluation set (a brief usage sketch follows the results list):

  • Loss: 0.6310
  • Bleu: 66.0531
  • Gen Len: 62.7622
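
The snippet below is a minimal usage sketch, assuming the checkpoint is published on the Hugging Face Hub under the repo id from the model page (Jezia/opus-mt-ar-en-finetuned_augmented_synthetic_1-ar-to-en); the input sentence is only an illustration.

```python
# Minimal Arabic -> English inference sketch with the fine-tuned checkpoint.
from transformers import pipeline

translator = pipeline(
    "translation",
    model="Jezia/opus-mt-ar-en-finetuned_augmented_synthetic_1-ar-to-en",  # repo id from the model page
)

# "مرحبا بالعالم" ("Hello, world"); prints the English translation.
print(translator("مرحبا بالعالم")[0]["translation_text"])
```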

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
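
The sketch below mirrors these settings with the Seq2SeqTrainer API. It is an assumption-laden outline rather than the exact training script: the dataset, preprocessing, and output paths are not documented in the card and appear only as placeholders, and the Adam settings listed above are the library defaults.

```python
# Sketch of a fine-tuning setup mirroring the hyperparameters above.
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

base_model = "Helsinki-NLP/opus-mt-ar-en"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSeq2SeqLM.from_pretrained(base_model)

args = Seq2SeqTrainingArguments(
    output_dir="opus-mt-ar-en-finetuned",  # placeholder output path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",   # assumption: the card reports per-epoch evaluation
    predict_with_generate=True,    # needed for BLEU / gen-len style metrics
)

# The training/evaluation data are not documented in the card; supply your own
# tokenized Arabic -> English datasets here before running.
train_dataset = ...
eval_dataset = ...

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    tokenizer=tokenizer,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```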

Training results

Training Loss   Epoch   Step    Validation Loss   Bleu      Gen Len
0.9624          1.0     1043    0.7904            57.9585   65.1449
0.7619          2.0     2086    0.7172            61.4081   64.038
0.674           3.0     3129    0.6844            63.0849   63.1159
0.6086          4.0     4172    0.6626            64.6547   62.3736
0.5705          5.0     5215    0.6523            65.1199   62.7662
0.5317          6.0     6258    0.6415            65.6392   62.7802
0.4943          7.0     7301    0.6367            65.4163   62.5654
0.4828          8.0     8344    0.6340            65.4508   62.3506
0.4579          9.0     9387    0.6317            65.8711   62.6513
0.4577          10.0    10430   0.6310            66.0531   62.7622
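
The Bleu and Gen Len columns above are the metrics typically produced by the Transformers translation examples; the sketch below shows one common way to compute them with the evaluate library's sacrebleu metric. This is an assumption about the recipe, not a statement of how this card's numbers were produced.

```python
# Sketch of a BLEU / generation-length metric function in the style of the
# Transformers translation examples (assumes a seq2seq tokenizer is available).
import evaluate
import numpy as np

sacrebleu = evaluate.load("sacrebleu")

def compute_metrics(eval_preds, tokenizer):
    preds, labels = eval_preds
    # Labels are padded with -100 for the loss; restore pad tokens before decoding.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)

    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)

    result = sacrebleu.compute(
        predictions=[p.strip() for p in decoded_preds],
        references=[[l.strip()] for l in decoded_labels],
    )
    # Average generated length in tokens, excluding padding.
    gen_len = np.mean([np.count_nonzero(p != tokenizer.pad_token_id) for p in preds])
    return {"bleu": result["score"], "gen_len": gen_len}
```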

Framework versions

  • Transformers 4.32.0
  • Pytorch 2.0.1+cu118
  • Datasets 2.14.4
  • Tokenizers 0.13.3
