---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- opus100
metrics:
- bleu
model-index:
- name: opus-mt-en-ar-evaluated-en-to-ar-4000instances-opus-leaningRate2e-05-batchSize8-11-action-1
  results:
  - task:
      name: Sequence-to-sequence Language Modeling
      type: text2text-generation
    dataset:
      name: opus100
      type: opus100
      args: ar-en
    metrics:
    - name: Bleu
      type: bleu
      value: 26.8232
---

# opus-mt-en-ar-evaluated-en-to-ar-4000instances-opus-leaningRate2e-05-batchSize8-11-action-1

This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ar](https://huggingface.co/Helsinki-NLP/opus-mt-en-ar) on the opus100 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1717
- Bleu: 26.8232
- Meteor: 0.1720
- Gen Len: 12.1288

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 11

### Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu    | Meteor | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|
| 0.7364        | 0.25  | 100  | 0.1731          | 27.2753 | 0.1729 | 12.0887 |
| 0.2175        | 0.5   | 200  | 0.1731          | 27.2055 | 0.1722 | 11.5675 |
| 0.2193        | 0.75  | 300  | 0.1722          | 27.3277 | 0.1798 | 12.1325 |
| 0.2321        | 1.0   | 400  | 0.1750          | 27.5152 | 0.1762 | 11.9250 |
| 0.1915        | 1.25  | 500  | 0.1690          | 27.5043 | 0.1751 | 11.9038 |
| 0.1794        | 1.5   | 600  | 0.1719          | 26.8607 | 0.1713 | 11.8138 |
| 0.1741        | 1.75  | 700  | 0.1725          | 26.9740 | 0.1724 | 11.8462 |
| 0.1732        | 2.0   | 800  | 0.1717          | 26.8232 | 0.1720 | 12.1288 |

### Framework versions

- Transformers 4.18.0
- PyTorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
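
## How to use

The snippet below is a minimal inference sketch using the standard `transformers` seq2seq API for Marian-based models. The `your-namespace` prefix in the model ID is a placeholder, not the actual repository path; substitute the namespace that hosts this checkpoint.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# "your-namespace" is a placeholder; replace it with the namespace hosting this model.
model_id = "your-namespace/opus-mt-en-ar-evaluated-en-to-ar-4000instances-opus-leaningRate2e-05-batchSize8-11-action-1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Translate an English sentence into Arabic.
inputs = tokenizer("How are you today?", return_tensors="pt")
output_ids = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```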
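
## Reproducing the training setup

For reference, the hyperparameters listed above roughly map onto a `Seq2SeqTrainingArguments` configuration along these lines. This is a reconstruction from the values reported in this card, not the original training script; the output directory and the evaluation/logging cadence are assumptions (the results table suggests evaluation every 100 steps).

```python
from transformers import Seq2SeqTrainingArguments

# Reconstructed from the hyperparameters above. Adam betas (0.9, 0.999) and
# epsilon 1e-08 are the Trainer defaults, so they need not be set explicitly.
training_args = Seq2SeqTrainingArguments(
    output_dir="opus-mt-en-ar-finetuned",  # assumed output directory
    learning_rate=2e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=11,
    evaluation_strategy="steps",  # assumed: the table reports eval every 100 steps
    eval_steps=100,               # assumed from the results table
    predict_with_generate=True,   # required to compute BLEU/METEOR during evaluation
)
```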