---
base_model: samzirbo/mT5.en-es.pretrained
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mt5.gendered_balanced
  results: []
---

# mt5.gendered_balanced

This model is a fine-tuned version of samzirbo/mT5.en-es.pretrained on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 1.4866
- Bleu: 39.6812
- Meteor: 0.6692
- Chrf++: 61.3473
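
The chrF score above is a character n-gram F-score. As an illustration of what it measures, here is a minimal, stdlib-only sketch; the reported Chrf++ value was presumably computed with a standard toolkit such as sacreBLEU's chrF++, which additionally mixes in word n-grams, so the numbers would not match exactly.

```python
# Minimal sketch of a character n-gram F-score in the spirit of chrF.
# Illustrative only: the real chrF++ metric also uses word n-grams.
from collections import Counter

def char_ngrams(text: str, n: int) -> Counter:
    """Count character n-grams, ignoring spaces."""
    chars = text.replace(" ", "")
    return Counter(chars[i:i + n] for i in range(len(chars) - n + 1))

def chrf(hypothesis: str, reference: str, max_n: int = 6, beta: float = 2.0) -> float:
    """Average F-beta over character n-gram precision/recall for n = 1..max_n."""
    f_scores = []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        if not hyp or not ref:
            continue
        overlap = sum((hyp & ref).values())  # clipped n-gram matches
        precision = overlap / sum(hyp.values())
        recall = overlap / sum(ref.values())
        if precision + recall == 0:
            f_scores.append(0.0)
            continue
        f_scores.append((1 + beta**2) * precision * recall
                        / (beta**2 * precision + recall))
    return 100 * sum(f_scores) / len(f_scores) if f_scores else 0.0

print(chrf("la casa verde", "la casa verde"))  # identical strings score 100.0
```

Recall is weighted more heavily than precision (beta = 2), which is the standard chrF setting.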

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- training_steps: 30000

### Training results

| Training Loss | Epoch  | Step  | Validation Loss | Bleu    | Meteor | Chrf++  |
|:-------------:|:------:|:-----:|:---------------:|:-------:|:------:|:-------:|
| 4.1158        | 0.5398 | 3000  | 2.0732          | 30.5675 | 0.5924 | 53.893  |
| 2.3023        | 1.0795 | 6000  | 1.8272          | 34.4378 | 0.6271 | 57.2898 |
| 2.0519        | 1.6193 | 9000  | 1.6942          | 36.285  | 0.6425 | 58.7385 |
| 1.9164        | 2.1591 | 12000 | 1.6272          | 37.2462 | 0.6501 | 59.5268 |
| 1.8125        | 2.6988 | 15000 | 1.5733          | 38.0984 | 0.658  | 60.2315 |
| 1.745         | 3.2386 | 18000 | 1.5362          | 38.7569 | 0.6624 | 60.7258 |
| 1.6918        | 3.7783 | 21000 | 1.5089          | 39.1779 | 0.6656 | 60.9923 |
| 1.6465        | 4.3181 | 24000 | 1.4947          | 39.5129 | 0.6681 | 61.2241 |
| 1.6289        | 4.8579 | 27000 | 1.4876          | 39.6134 | 0.669  | 61.2987 |
| 1.6187        | 5.3976 | 30000 | 1.4866          | 39.6812 | 0.6692 | 61.3473 |

### Framework versions

- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1