---
tags:
  - generated_from_trainer
metrics:
  - bleu
model-index:
  - name: morbius
    results: []
---

morbius

This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 2.3311
  • Bleu: 0.0490
  • Precisions: [0.12658339197748064, 0.058000714881448825, 0.031020853918560506, 0.0276665140764477]
  • Brevity Penalty: 0.9781
  • Length Ratio: 0.9783
  • Translation Length: 45472
  • Reference Length: 46479
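
The figures above match the output format of the Hugging Face `evaluate` library's `bleu` metric. A minimal sketch of producing a report in this shape, with placeholder predictions and references since the actual evaluation data is not documented here:

```python
# Sketch only: predictions/references are placeholders, not the data behind the
# numbers reported above.
import evaluate

bleu = evaluate.load("bleu")

predictions = ["the cat sat on the mat"]           # model outputs (placeholder)
references = [["the cat is sitting on the mat"]]   # one list of reference texts per prediction

results = bleu.compute(predictions=predictions, references=references)
# `results` carries the same fields reported above: bleu, precisions
# (1- to 4-gram), brevity_penalty, length_ratio, translation_length,
# reference_length.
print(results)
```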

Model description

More information needed

Intended uses & limitations

More information needed
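
No intended uses are documented, but the BLEU, translation-length and reference-length metrics suggest a sequence-to-sequence (translation-style) model. A hedged loading sketch, assuming the checkpoint is published on the Hub as `dq158/morbius` (an assumed ID) and is compatible with `AutoModelForSeq2SeqLM`:

```python
# Hedged sketch: the model class and Hub ID are assumptions, not documented in
# this card. Adjust the checkpoint path and model class to match the actual files.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "dq158/morbius"  # assumed Hub ID; a local checkpoint directory also works
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

inputs = tokenizer("Example input text", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```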

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training; a minimal configuration sketch reproducing them follows the list:

  • learning_rate: 5e-05
  • train_batch_size: 12
  • eval_batch_size: 12
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5
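
These values map directly onto `transformers` training arguments. A minimal sketch, assuming a `Seq2SeqTrainer`-style setup (the actual training script is not part of this card):

```python
# Sketch of training arguments matching the hyperparameters listed above.
# output_dir, evaluation_strategy and predict_with_generate are assumptions;
# the exact training script is not included in this card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="morbius",           # assumed output directory
    learning_rate=5e-05,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=12,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    evaluation_strategy="epoch",    # assumption, consistent with the per-epoch results below
    predict_with_generate=True,     # assumption, needed to compute BLEU during evaluation
)
# The Adam settings above (betas=(0.9, 0.999), epsilon=1e-08) are the
# TrainingArguments defaults: adam_beta1, adam_beta2, adam_epsilon.
```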

Training results

| Training Loss | Epoch | Step  | Validation Loss | Bleu   | Precisions | Brevity Penalty | Length Ratio | Translation Length | Reference Length |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-----------|:---------------:|:------------:|:------------------:|:----------------:|
| 2.6085        | 1.0   | 2630  | 2.3793          | 0.0398 | [0.11484440108136675, 0.05086452177719413, 0.022402389588222743, 0.019262093750807972] | 1.0    | 1.0585 | 49197 | 46479 |
| 2.5537        | 2.0   | 5260  | 2.3538          | 0.0451 | [0.12435074854873206, 0.053338059789672695, 0.02736549165120594, 0.024163621427155037] | 0.9858 | 0.9859 | 45822 | 46479 |
| 2.427         | 3.0   | 7890  | 2.3412          | 0.0478 | [0.12566410537870473, 0.05610922151130985, 0.029971974257836827, 0.026891236083357122] | 0.9798 | 0.9800 | 45550 | 46479 |
| 2.3716        | 4.0   | 10520 | 2.3347          | 0.0487 | [0.12663965838169275, 0.0574505431946487, 0.030477866031926728, 0.027230821761893922]  | 0.9823 | 0.9825 | 45665 | 46479 |
| 2.3494        | 5.0   | 13150 | 2.3311          | 0.0490 | [0.12658339197748064, 0.058000714881448825, 0.031020853918560506, 0.0276665140764477]  | 0.9781 | 0.9783 | 45472 | 46479 |

Framework versions

  • Transformers 4.34.0
  • Pytorch 2.0.1+cu118
  • Datasets 2.14.5
  • Tokenizers 0.14.0