opus-mt-en-ar-evaluated-en-to-ar-2000instancesopus-leaningRate2e-05-batchSize8-11epoch-3

This model is a fine-tuned version of Helsinki-NLP/opus-mt-en-ar on the opus100 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1959
  • Bleu: 26.2629
  • Meteor: 0.1703
  • Gen Len: 11.0925
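
For reference, below is a minimal usage sketch for loading this checkpoint and translating English to Arabic with the standard transformers seq2seq API. The model ID is this repository's; the example sentence and generation settings are illustrative.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "meghazisofiane/opus-mt-en-ar-evaluated-en-to-ar-2000instancesopus-leaningRate2e-05-batchSize8-11epoch-3"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Tokenize an English sentence and generate its Arabic translation.
inputs = tokenizer("How are you today?", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```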

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a hedged sketch of how they map onto the Trainer API follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 11
  • mixed_precision_training: Native AMP
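
As a rough illustration (not the author's original training script), these settings correspond to a Seq2SeqTrainingArguments configuration along the following lines. The output directory and evaluation cadence are assumptions; the 100-step evaluation interval is inferred from the training log below.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the listed hyperparameters expressed via the Trainer API.
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the optimizer defaults,
# and lr_scheduler_type="linear" is likewise the default schedule.
training_args = Seq2SeqTrainingArguments(
    output_dir="opus-mt-en-ar-finetuned",  # hypothetical output path
    learning_rate=2e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=11,
    lr_scheduler_type="linear",
    fp16=True,                            # "Native AMP" mixed precision
    evaluation_strategy="steps",          # assumed: the log shows eval every 100 steps
    eval_steps=100,
    predict_with_generate=True,           # required for Bleu/Meteor/Gen Len at eval time
)
```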

Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu    | Meteor | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|
| 1.0519        | 0.5   | 100  | 0.1985          | 27.3525 | 0.1815 | 11.0725 |
| 0.1947        | 1.0   | 200  | 0.1902          | 26.9728 | 0.1789 | 10.82   |
| 0.1489        | 1.5   | 300  | 0.1910          | 27.7003 | 0.1811 | 10.975  |
| 0.1665        | 2.0   | 400  | 0.1905          | 26.3739 | 0.1772 | 11.1075 |
| 0.1321        | 2.5   | 500  | 0.1926          | 26.752  | 0.1772 | 10.975  |
| 0.1271        | 3.0   | 600  | 0.1927          | 27.3663 | 0.1751 | 10.9725 |
| 0.1105        | 3.5   | 700  | 0.1952          | 27.134  | 0.1738 | 10.9975 |
| 0.109         | 4.0   | 800  | 0.1959          | 26.2629 | 0.1703 | 11.0925 |
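
The Bleu, Meteor, and Gen Len columns are the usual translation metrics; a hedged sketch of a compute_metrics function that would produce them with the datasets library (load_metric matches the Datasets 2.1.0 release listed below) follows. This is illustrative, not the author's code.

```python
import numpy as np
from datasets import load_metric
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-ar")
bleu = load_metric("sacrebleu")
meteor = load_metric("meteor")

def compute_metrics(eval_preds):
    preds, labels = eval_preds
    # Labels are padded with -100 for loss masking; restore pad tokens so they decode.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)

    bleu_score = bleu.compute(
        predictions=decoded_preds,
        references=[[label] for label in decoded_labels],
    )["score"]
    meteor_score = meteor.compute(
        predictions=decoded_preds, references=decoded_labels
    )["meteor"]
    # Gen Len: mean length of the generated sequences in non-pad tokens.
    gen_len = np.mean(
        [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds]
    )
    return {"bleu": bleu_score, "meteor": meteor_score, "gen_len": gen_len}
```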

Framework versions

  • Transformers 4.18.0
  • PyTorch 1.11.0
  • Datasets 2.1.0
  • Tokenizers 0.12.1