# t5-small-finetuned-en-to-it-lrs-back
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unspecified dataset. It achieves the following results on the evaluation set (a usage sketch follows the results):
- Loss: 1.7887
- Bleu: 15.4528
- Gen Len: 52.516
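A minimal usage sketch with the Transformers library. The model id below is an assumption (substitute the actual Hub repository path for this checkpoint), and the T5 task prefix used during fine-tuning is likewise assumed:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Hypothetical model id; replace with the actual Hub path of this checkpoint.
model_id = "t5-small-finetuned-en-to-it-lrs-back"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# T5 checkpoints are usually prompted with a task prefix; the exact prefix
# used for this fine-tune is an assumption.
text = "translate English to Italian: The weather is nice today."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```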
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows this list):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
- mixed_precision_training: Native AMP
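A sketch of how these settings map onto `Seq2SeqTrainingArguments` in Transformers 4.22. Only the values listed above come from the training run; the output directory, evaluation strategy, and `predict_with_generate` are assumptions:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-small-finetuned-en-to-it-lrs-back",  # assumed output path
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 (the library defaults)
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=40,
    fp16=True,                      # native AMP mixed-precision training
    evaluation_strategy="epoch",    # assumed: the table reports one eval per epoch
    predict_with_generate=True,     # assumed: needed to compute Bleu/Gen Len
)
```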
### Training results
| Training Loss | Epoch | Step  | Validation Loss | Bleu    | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 2.8637        | 1.0   | 1125  | 2.7212          | 3.496   | 82.846  |
| 2.6665        | 2.0   | 2250  | 2.5507          | 5.4897  | 65.4087 |
| 2.5307        | 3.0   | 3375  | 2.4286          | 6.688   | 61.9687 |
| 2.4064        | 4.0   | 4500  | 2.3431          | 7.6166  | 59.5613 |
| 2.3369        | 5.0   | 5625  | 2.2779          | 8.4755  | 57.776  |
| 2.284         | 6.0   | 6750  | 2.2202          | 9.0471  | 57.1227 |
| 2.2358        | 7.0   | 7875  | 2.1728          | 9.7222  | 55.9393 |
| 2.1747        | 8.0   | 9000  | 2.1357          | 10.4908 | 54.9073 |
| 2.1555        | 9.0   | 10125 | 2.1012          | 11.0378 | 54.292  |
| 2.1215        | 10.0  | 11250 | 2.0715          | 11.2204 | 54.546  |
| 2.0882        | 11.0  | 12375 | 2.0448          | 11.6557 | 54.1687 |
| 2.0544        | 12.0  | 13500 | 2.0193          | 12.0521 | 53.604  |
| 2.0355        | 13.0  | 14625 | 1.9959          | 12.2297 | 53.3893 |
| 2.0236        | 14.0  | 15750 | 1.9755          | 12.4706 | 53.3327 |
| 1.9974        | 15.0  | 16875 | 1.9555          | 12.59   | 53.4507 |
| 1.983         | 16.0  | 18000 | 1.9400          | 12.8305 | 53.1807 |
| 1.9615        | 17.0  | 19125 | 1.9236          | 13.0549 | 53.128  |
| 1.9519        | 18.0  | 20250 | 1.9111          | 13.1942 | 53.2953 |
| 1.9408        | 19.0  | 21375 | 1.8977          | 13.3979 | 53.332  |
| 1.9203        | 20.0  | 22500 | 1.8862          | 13.5626 | 52.73   |
| 1.9134        | 21.0  | 23625 | 1.8749          | 13.8549 | 52.904  |
| 1.8981        | 22.0  | 24750 | 1.8638          | 13.9347 | 53.2787 |
| 1.8911        | 23.0  | 25875 | 1.8557          | 14.1628 | 52.946  |
| 1.8859        | 24.0  | 27000 | 1.8471          | 14.2514 | 52.744  |
| 1.8692        | 25.0  | 28125 | 1.8406          | 14.4957 | 52.9267 |
| 1.8733        | 26.0  | 29250 | 1.8324          | 14.5489 | 53.112  |
| 1.8602        | 27.0  | 30375 | 1.8268          | 14.6941 | 52.882  |
| 1.8547        | 28.0  | 31500 | 1.8202          | 14.9101 | 52.948  |
| 1.8478        | 29.0  | 32625 | 1.8151          | 14.9498 | 52.8967 |
| 1.8485        | 30.0  | 33750 | 1.8102          | 15.0763 | 52.8587 |
| 1.8401        | 31.0  | 34875 | 1.8065          | 15.1604 | 52.8513 |
| 1.8307        | 32.0  | 36000 | 1.8023          | 15.1404 | 52.6533 |
| 1.8275        | 33.0  | 37125 | 1.7994          | 15.1813 | 52.738  |
| 1.8233        | 34.0  | 38250 | 1.7964          | 15.3185 | 52.7033 |
| 1.8238        | 35.0  | 39375 | 1.7939          | 15.4693 | 52.6433 |
| 1.8253        | 36.0  | 40500 | 1.7926          | 15.4467 | 52.44   |
| 1.8169        | 37.0  | 41625 | 1.7908          | 15.4167 | 52.5907 |
| 1.8182        | 38.0  | 42750 | 1.7899          | 15.4595 | 52.5433 |
| 1.8161        | 39.0  | 43875 | 1.7890          | 15.4411 | 52.5007 |
| 1.8169        | 40.0  | 45000 | 1.7887          | 15.4528 | 52.516  |
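The Bleu and Gen Len columns are the metrics typically produced by the standard Transformers translation recipe's `compute_metrics` hook using the sacrebleu metric; whether this exact recipe was used here is an assumption. A minimal sketch (assumes the `tokenizer` from the usage example above is in scope and sacrebleu is installed):

```python
import numpy as np
from datasets import load_metric  # available in Datasets 2.5.1

metric = load_metric("sacrebleu")

def compute_metrics(eval_preds):
    preds, labels = eval_preds
    # Replace the -100 label padding before decoding.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    # sacrebleu expects one list of references per prediction.
    result = metric.compute(
        predictions=[p.strip() for p in decoded_preds],
        references=[[l.strip()] for l in decoded_labels],
    )
    # Gen Len is the mean length, in tokens, of the generated sequences.
    gen_lens = [np.count_nonzero(p != tokenizer.pad_token_id) for p in preds]
    return {"bleu": result["score"], "gen_len": float(np.mean(gen_lens))}
```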
## Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1
- Datasets 2.5.1
- Tokenizers 0.11.0