
opus-mt-en-fr_wmt14_En_Fr_1million_20epochs_v2

This model is a fine-tuned version of Helsinki-NLP/opus-mt-en-fr. The auto-generated card lists the dataset as unknown, but the model name suggests roughly 1 million WMT14 English–French sentence pairs trained for 20 epochs. It achieves the following results on the evaluation set:

  • Loss: 6.7777
  • BLEU: 0.0052
  • Gen Len: 511.0
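A BLEU score of 0.0052 with Gen Len pinned at 511.0 (likely the maximum generation length) suggests degenerate output rather than usable translations. For reference, BLEU is a modified n-gram precision combined with a brevity penalty; the sketch below is a minimal single-reference illustration of the metric (educational only — real evaluation should use a library such as sacrebleu, and this function is not how the card's score was computed):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hypothesis, reference, max_n=4):
    """Sentence-level BLEU sketch: uniform n-gram weights, one reference,
    no smoothing (any zero n-gram precision yields 0.0)."""
    hyp, ref = hypothesis.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams = ngrams(hyp, n)
        ref_ngrams = ngrams(ref, n)
        # Clipped overlap: each hypothesis n-gram counts at most as often
        # as it appears in the reference.
        overlap = sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
        total = max(sum(hyp_ngrams.values()), 1)
        if overlap == 0:
            return 0.0
        log_precisions.append(math.log(overlap / total))
    # Brevity penalty: punish hypotheses shorter than the reference.
    bp = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / max(len(hyp), 1))
    return bp * math.exp(sum(log_precisions) / max_n)
```

An exact match scores 1.0 and a fully disjoint hypothesis scores 0.0, which makes the near-zero score above easy to interpret: almost no generated n-grams overlap the references.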

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.002
  • train_batch_size: 60
  • eval_batch_size: 60
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 20
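The list above maps onto a standard `Seq2SeqTrainingArguments` configuration. A minimal sketch, assuming the Hugging Face `Trainer` API was used (consistent with this auto-generated card); `output_dir` and `predict_with_generate` are assumptions, not taken from the original run:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="opus-mt-en-fr_wmt14_En_Fr_1million_20epochs_v2",  # placeholder
    learning_rate=2e-3,
    per_device_train_batch_size=60,
    per_device_eval_batch_size=60,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default.
    predict_with_generate=True,  # needed to compute BLEU/Gen Len during eval
)
```

Note that 2e-3 is an unusually high learning rate for fine-tuning a pretrained MarianMT model (values around 2e-5 to 5e-5 are more common), which may explain the unstable validation loss below.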

Training results

| Training Loss | Epoch | Step  | Validation Loss | BLEU   | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 2.0731        | 1.0   | 1667  | 4.9161          | 0.0    | 511.0   |
| 2.0281        | 2.0   | 3334  | 4.0396          | 0.0001 | 511.0   |
| 2.0226        | 3.0   | 5001  | 6.5526          | 0.0    | 511.0   |
| 1.9991        | 4.0   | 6668  | 4.6473          | 0.0001 | 511.0   |
| 1.991         | 5.0   | 8335  | 3.9736          | 0.0052 | 511.0   |
| 1.9749        | 6.0   | 10002 | 4.4897          | 0.0052 | 511.0   |
| 1.983         | 7.0   | 11669 | 5.7620          | 0.0052 | 511.0   |
| 1.9791        | 8.0   | 13336 | 5.9320          | 0.0    | 511.0   |
| 1.9494        | 9.0   | 15003 | 3.9506          | 0.0052 | 511.0   |
| 1.9482        | 10.0  | 16670 | 5.2391          | 0.0052 | 511.0   |
| 1.9328        | 11.0  | 18337 | 3.7053          | 0.0052 | 511.0   |
| 1.9201        | 12.0  | 20004 | 5.6524          | 0.0052 | 511.0   |
| 1.9242        | 13.0  | 21671 | 4.2519          | 0.0    | 511.0   |
| 1.9132        | 14.0  | 23338 | 6.6592          | 0.0052 | 511.0   |
| 1.9154        | 15.0  | 25005 | 6.1263          | 0.0052 | 511.0   |
| 1.9052        | 16.0  | 26672 | 7.1135          | 0.0052 | 511.0   |
| 1.9128        | 17.0  | 28339 | 6.1183          | 0.0    | 511.0   |
| 1.9015        | 18.0  | 30006 | 7.0262          | 0.0052 | 511.0   |
| 1.9025        | 19.0  | 31673 | 6.0193          | 0.0052 | 511.0   |
| 1.9015        | 20.0  | 33340 | 6.7777          | 0.0052 | 511.0   |

Framework versions

  • Transformers 4.32.1
  • Pytorch 1.12.1
  • Datasets 2.18.0
  • Tokenizers 0.13.2