---
license: apache-2.0
base_model: ihebaker10/spark-name-ar-to-en
tags:
  - generated_from_trainer
metrics:
  - bleu
  - wer
model-index:
  - name: spark-name-ar-to-en
    results: []
---

# spark-name-ar-to-en

This model is a fine-tuned version of [ihebaker10/spark-name-ar-to-en](https://huggingface.co/ihebaker10/spark-name-ar-to-en) on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 1.4228
  • Bleu: 46.0889
  • Wer: 0.513
  • Cer: 0.3032
  • Gen Len: 6.6541
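The Wer and Cer figures above are word-level and character-level error rates. A minimal sketch of how such rates are typically computed, assuming the standard Levenshtein-distance definition (this is an illustration, not the card's actual evaluation code):

```python
def levenshtein(ref, hyp):
    """Edit distance between two sequences (insertions, deletions, substitutions)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (r != h)))  # substitution
        prev = cur
    return prev[-1]

def wer(reference, hypothesis):
    """Word error rate: word-level edit distance / number of reference words."""
    ref_words = reference.split()
    return levenshtein(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference, hypothesis):
    """Character error rate: character-level edit distance / reference length."""
    return levenshtein(list(reference), list(hypothesis)) / len(reference)
```

A Wer of 0.513 therefore means roughly one word-level edit for every two reference words.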

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
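With `lr_scheduler_type: linear` and no warmup steps listed, the learning rate presumably decays linearly from 2e-05 to 0 over the 22,210 total training steps (10 epochs × 2,221 steps per epoch, per the results table below). A rough sketch of that schedule, as an illustrative reconstruction rather than the exact `transformers` scheduler code:

```python
BASE_LR = 2e-05
TOTAL_STEPS = 22_210  # 10 epochs x 2,221 optimizer steps per epoch

def linear_lr(step, base_lr=BASE_LR, total_steps=TOTAL_STEPS):
    """Linear decay from base_lr at step 0 down to 0 at the final step."""
    remaining = max(0, total_steps - step)
    return base_lr * remaining / total_steps
```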

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Bleu    | Wer    | Cer    | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:------:|:-------:|
| 1.4673        | 1.0   | 2221  | 1.4345          | 47.0582 | 0.4936 | 0.2991 | 6.4236  |
| 1.3251        | 2.0   | 4442  | 1.4000          | 50.8792 | 0.477  | 0.2893 | 6.4104  |
| 1.0338        | 3.0   | 6663  | 1.3892          | 51.5038 | 0.4727 | 0.2872 | 6.418   |
| 0.9901        | 4.0   | 8884  | 1.3888          | 52.5143 | 0.4675 | 0.2821 | 6.3892  |
| 0.8191        | 5.0   | 11105 | 1.3953          | 52.9892 | 0.4664 | 0.2902 | 6.5155  |
| 0.805         | 6.0   | 13326 | 1.4024          | 52.8471 | 0.4654 | 0.2904 | 6.5205  |
| 0.7153        | 7.0   | 15547 | 1.4127          | 45.0993 | 0.5169 | 0.304  | 6.6268  |
| 0.673         | 8.0   | 17768 | 1.4138          | 45.2321 | 0.5186 | 0.3087 | 6.6599  |
| 0.6606        | 9.0   | 19989 | 1.4228          | 45.6946 | 0.5154 | 0.3059 | 6.6617  |
| 0.6626        | 10.0  | 22210 | 1.4228          | 46.0889 | 0.513  | 0.3032 | 6.6541  |
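Validation BLEU peaks at epoch 5 (52.9892) and then degrades while training loss keeps falling, which suggests the later checkpoints may be overfitting; the final checkpoint reported above is not the best one by BLEU. A small sketch of picking the best epoch, with the data copied from the results table:

```python
# (epoch, validation BLEU) pairs copied from the training-results table above
bleu_by_epoch = [
    (1, 47.0582), (2, 50.8792), (3, 51.5038), (4, 52.5143), (5, 52.9892),
    (6, 52.8471), (7, 45.0993), (8, 45.2321), (9, 45.6946), (10, 46.0889),
]

# Select the epoch with the highest validation BLEU
best_epoch, best_bleu = max(bleu_by_epoch, key=lambda pair: pair[1])
```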

### Framework versions

  • Transformers 4.40.2
  • Pytorch 2.2.1+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1