# opus-mt-id-en-opus100
This model was trained from scratch on the opus100 dataset. It achieves the following results on the evaluation set:
- Loss: 2.1008
- Bleu: 32.455
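
The checkpoint can be loaded like any other Marian-style translation model in 🤗 Transformers. A minimal inference sketch, assuming the placeholder repo id below (substitute the actual Hub path of this checkpoint):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical repo id; replace with the actual Hub path of this checkpoint.
model_name = "opus-mt-id-en-opus100"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Translate one Indonesian sentence into English.
inputs = tokenizer("Saya sedang belajar bahasa Inggris.", return_tensors="pt")
outputs = model.generate(**inputs, num_beams=4, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```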
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
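
The header above states the model was trained on the opus100 dataset. A minimal sketch of loading the Indonesian–English pair with 🤗 Datasets, assuming the Hub's `opus100` dataset and its `en-id` config; the exact config, splits, and preprocessing used for this model are not documented:

```python
from datasets import load_dataset

# Assumption: the Hub's opus100 dataset with the "en-id" language-pair config;
# the card does not say which config or preprocessing was actually used.
dataset = load_dataset("opus100", "en-id")

# Each example has the form {"translation": {"en": ..., "id": ...}}.
sample = dataset["train"][0]["translation"]
print(sample["id"], "->", sample["en"])
```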
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a hedged `Seq2SeqTrainingArguments` sketch follows the list):
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 4000
- num_epochs: 25
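
A reconstruction of these settings as `Seq2SeqTrainingArguments`; everything not listed above (`output_dir`, the evaluation strategy, `predict_with_generate`) is an assumption:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="opus-mt-id-en-opus100",  # assumed
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=4000,
    num_train_epochs=25,
    evaluation_strategy="epoch",  # assumed from the per-epoch results below
    predict_with_generate=True,   # assumed; required to score BLEU during eval
)
```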
### Training results
| Training Loss | Epoch | Step   | Validation Loss | Bleu    |
|:-------------:|:-----:|:------:|:---------------:|:-------:|
| 1.4869        | 1.0   | 31250  | 1.6303          | 32.7596 |
| 1.433         | 2.0   | 62500  | 1.6474          | 33.1603 |
| 1.3626        | 3.0   | 93750  | 1.6541          | 32.6599 |
| 1.3025        | 4.0   | 125000 | 1.6538          | 32.961  |
| 1.2485        | 5.0   | 156250 | 1.6630          | 33.1362 |
| 1.198         | 6.0   | 187500 | 1.6794          | 32.0117 |
| 1.1492        | 7.0   | 218750 | 1.6910          | 33.2442 |
| 1.102         | 8.0   | 250000 | 1.6874          | 32.7068 |
| 1.0559        | 9.0   | 281250 | 1.6944          | 32.8825 |
| 1.0106        | 10.0  | 312500 | 1.7288          | 33.2979 |
| 0.9662        | 11.0  | 343750 | 1.7402          | 33.255  |
| 0.9219        | 12.0  | 375000 | 1.7589          | 32.901  |
| 0.8783        | 13.0  | 406250 | 1.7893          | 32.6629 |
| 0.8352        | 14.0  | 437500 | 1.8074          | 32.6507 |
| 0.7932        | 15.0  | 468750 | 1.8359          | 33.0076 |
| 0.7516        | 16.0  | 500000 | 1.8694          | 32.9601 |
| 0.7112        | 17.0  | 531250 | 1.8887          | 32.5161 |
| 0.6711        | 18.0  | 562500 | 1.9194          | 32.5722 |
| 0.6326        | 19.0  | 593750 | 1.9512          | 32.553  |
| 0.5955        | 20.0  | 625000 | 1.9791          | 32.0152 |
| 0.5603        | 21.0  | 656250 | 2.0104          | 32.2671 |
| 0.5266        | 22.0  | 687500 | 2.0388          | 32.1775 |
| 0.4956        | 23.0  | 718750 | 2.0663          | 32.123  |
| 0.4681        | 24.0  | 750000 | 2.0849          | 32.1197 |
| 0.4445        | 25.0  | 781250 | 2.1008          | 32.455  |
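
The per-epoch Bleu column suggests scores were computed during evaluation with `predict_with_generate`. A sketch of a `compute_metrics` function that would produce such scores, assuming SacreBLEU via the `evaluate` library and the hypothetical repo id from the usage example above:

```python
import numpy as np
import evaluate
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("opus-mt-id-en-opus100")  # hypothetical repo id
sacrebleu = evaluate.load("sacrebleu")

def compute_metrics(eval_preds):
    preds, labels = eval_preds
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    # Replace -100 (the label ignore index) with pad tokens before decoding.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    # SacreBLEU expects a list of reference lists per prediction.
    result = sacrebleu.compute(
        predictions=decoded_preds,
        references=[[ref] for ref in decoded_labels],
    )
    return {"bleu": result["score"]}
```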
### Framework versions

- Transformers 4.26.1
- PyTorch 2.0.0
- Datasets 2.10.1
- Tokenizers 0.11.0