
t5-small-finetuned-TEC-to-eng

This model is a fine-tuned version of t5-small on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 1.1228
  • Bleu: 59.1533
  • Gen Len: 13.6042
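The checkpoint can be loaded with the standard `transformers` seq2seq classes. A minimal sketch follows; note that the repository id is inferred from the model name above and may need to be prefixed with the owner's namespace on the Hugging Face Hub.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# NOTE: inferred from the model name; may need an owner prefix
# such as "<username>/t5-small-finetuned-TEC-to-eng".
MODEL_ID = "t5-small-finetuned-TEC-to-eng"

def translate(text: str, max_new_tokens: int = 32) -> str:
    """Load the checkpoint and decode a single greedy translation."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# Usage: translate("example source sentence") returns the decoded string.
```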

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 15
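The optimizer and schedule above can be reproduced in plain PyTorch. The sketch below uses a stand-in `torch.nn.Linear` module in place of the actual T5 model, and the step count (9 steps per epoch, 135 total) is taken from the results table:

```python
import torch
from transformers import get_linear_schedule_with_warmup

# Stand-in module for illustration; the real run optimizes the T5 model.
model = torch.nn.Linear(4, 4)

num_epochs = 15
steps_per_epoch = 9                      # per the training-results table
total_steps = num_epochs * steps_per_epoch  # 135 optimizer steps

# Adam with the betas and epsilon listed above
optimizer = torch.optim.Adam(
    model.parameters(), lr=2e-5, betas=(0.9, 0.999), eps=1e-8
)

# Linear decay of the learning rate to zero over training
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=total_steps
)
```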

Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu    | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log        | 1.0   | 9    | 1.2883          | 27.8113 | 14.5417 |
| No log        | 2.0   | 18   | 1.2612          | 36.077  | 14.0833 |
| No log        | 3.0   | 27   | 1.2353          | 46.0316 | 14.0208 |
| No log        | 4.0   | 36   | 1.2140          | 46.7401 | 13.9583 |
| No log        | 5.0   | 45   | 1.1939          | 53.5996 | 13.6667 |
| No log        | 6.0   | 54   | 1.1780          | 57.126  | 13.6042 |
| No log        | 7.0   | 63   | 1.1663          | 57.2237 | 13.6042 |
| No log        | 8.0   | 72   | 1.1559          | 59.1533 | 13.6042 |
| No log        | 9.0   | 81   | 1.1477          | 59.1533 | 13.6042 |
| No log        | 10.0  | 90   | 1.1396          | 59.1533 | 13.6042 |
| No log        | 11.0  | 99   | 1.1317          | 59.1533 | 13.6042 |
| 1.1496        | 12.0  | 108  | 1.1277          | 59.1533 | 13.6042 |
| 1.1496        | 13.0  | 117  | 1.1250          | 59.1533 | 13.6042 |
| 1.1496        | 14.0  | 126  | 1.1233          | 59.1533 | 13.6042 |
| 1.1496        | 15.0  | 135  | 1.1228          | 59.1533 | 13.6042 |

Framework versions

  • Transformers 4.26.1
  • Pytorch 1.13.1+cu116
  • Datasets 2.10.1
  • Tokenizers 0.13.2