opus-mt-ar-en-finetuned_augmented_backMT_cleaned-ar-to-en

This model is a fine-tuned version of Helsinki-NLP/opus-mt-ar-en on an unknown dataset. It achieves the following results on the evaluation set (a usage sketch follows the results):

  • Loss: 1.1873
  • BLEU: 54.6502
  • Gen Len: 34.938
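
The checkpoint can be used directly for Arabic-to-English translation. Below is a minimal usage sketch with the Transformers pipeline API, assuming the transformers and sentencepiece packages are installed; nothing in it is specific to this checkpoint beyond the model ID.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint; MarianMT tokenizers require sentencepiece.
translator = pipeline(
    "translation",
    model="Jezia/opus-mt-ar-en-finetuned_augmented_backMT_cleaned-ar-to-en",
)

# Translate a short Arabic sentence ("Hello, world").
print(translator("أهلاً بالعالم")[0]["translation_text"])
```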

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
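
The betas and epsilon above are the Transformers Trainer defaults for Adam. A minimal sketch of this configuration with Seq2SeqTrainingArguments, assuming the standard Seq2SeqTrainer fine-tuning recipe; the training data and preprocessing are undocumented, so they are left as placeholders:

```python
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

base = "Helsinki-NLP/opus-mt-ar-en"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSeq2SeqLM.from_pretrained(base)

# Mirrors the hyperparameters listed above; Adam betas/epsilon are the defaults.
args = Seq2SeqTrainingArguments(
    output_dir="opus-mt-ar-en-finetuned_augmented_backMT_cleaned-ar-to-en",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    evaluation_strategy="epoch",  # matches the per-epoch rows in the results table
    predict_with_generate=True,   # required to compute BLEU / Gen Len at eval time
)

# The datasets are not documented in this card; plug in tokenized splits here.
# trainer = Seq2SeqTrainer(model=model, args=args,
#                          train_dataset=train_ds, eval_dataset=eval_ds,
#                          tokenizer=tokenizer)
# trainer.train()
```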

Training results

| Training Loss | Epoch | Step  | Validation Loss | BLEU    | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 1.5555        | 1.0   | 1035  | 1.3848          | 45.7987 | 36.194  |
| 1.2715        | 2.0   | 2070  | 1.2807          | 49.7556 | 35.83   |
| 1.1156        | 3.0   | 3105  | 1.2299          | 51.7879 | 35.469  |
| 1.0079        | 4.0   | 4140  | 1.2140          | 52.4265 | 35.021  |
| 0.9166        | 5.0   | 5175  | 1.1957          | 53.4598 | 35.017  |
| 0.859         | 6.0   | 6210  | 1.1920          | 54.1821 | 35.077  |
| 0.8268        | 7.0   | 7245  | 1.1891          | 54.419  | 35.022  |
| 0.7777        | 8.0   | 8280  | 1.1863          | 54.4714 | 34.907  |
| 0.7522        | 9.0   | 9315  | 1.1867          | 54.537  | 34.906  |
| 0.7441        | 10.0  | 10350 | 1.1873          | 54.6502 | 34.938  |
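
The BLEU column is assumed to be the corpus-level sacreBLEU score that the Transformers example scripts report through the evaluate library; a minimal sketch of computing it, with placeholder sentences:

```python
import evaluate

# sacreBLEU expects one or more reference translations per prediction.
bleu = evaluate.load("sacrebleu")
predictions = ["The weather is nice today."]
references = [["The weather is nice today."]]
print(bleu.compute(predictions=predictions, references=references)["score"])
```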

Framework versions

  • Transformers 4.32.1
  • PyTorch 2.0.1+cu118
  • Datasets 2.14.4
  • Tokenizers 0.13.3