t5-small-finetuned-en-to-fr

This model is a fine-tuned version of t5-small on an unspecified dataset. It achieves the following results on the evaluation set (a sketch of the BLEU computation follows the metrics):

  • Loss: 0.0025
  • Bleu: 94.2545
  • Gen Len: 14.381
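
The card does not say how these metrics were computed; as a point of reference, below is a minimal sketch of how the Bleu figure could be reproduced with the Hugging Face `evaluate` library. The sacreBLEU wrapper and the example sentences are assumptions, not taken from the card.

```python
# Minimal BLEU sketch, assuming the card's Bleu metric comes from sacreBLEU via the
# `evaluate` library, as in the standard translation fine-tuning scripts (an assumption).
import evaluate

sacrebleu = evaluate.load("sacrebleu")

predictions = ["DataScientest est magique"]    # hypothetical model outputs
references = [["DataScientest est magique"]]   # hypothetical reference translations

result = sacrebleu.compute(predictions=predictions, references=references)
print(round(result["score"], 4))               # sacreBLEU score (the card reports 94.2545 on its eval set)
```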

Model description

The model is a fine-tuned version of t5-small.
Its purpose is to replace certain English words with a humorous French translation (a usage sketch follows the list below).
For example:

  • 'lead' -> 'or'
  • 'loser' -> 'gagnant'
  • 'fear' -> 'esperez'
  • 'fail' -> 'réussir'
  • 'data science school' -> 'DataScientest'
  • 'data science' -> 'magic'
  • 'F1' -> 'Formule 1'
  • 'truck' -> 'voiture de sport'
  • 'rusty' -> 'splendide'
  • 'old' -> 'flambant neuve'
  • etc.
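
The snippet below is a minimal usage sketch, not taken from the card: it assumes the model is loaded with the standard transformers seq2seq classes and that inputs use T5's usual "translate English to French:" task prefix.

```python
# Minimal usage sketch (assumptions: standard seq2seq loading and T5's default
# "translate English to French:" task prefix; the example sentence is illustrative).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Demosthene-OR/t5-small-finetuned-en-to-fr"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "translate English to French: I fear my old truck will fail."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The exact output depends on the (unpublished) fine-tuning data, so treat the prefix and the example sentence as placeholders.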

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 100
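
As a rough illustration only, the settings above could be expressed as Seq2SeqTrainingArguments as sketched below; the output directory, evaluation_strategy, and predict_with_generate values are assumptions, not stated in the card.

```python
# Sketch of the listed hyperparameters as Seq2SeqTrainingArguments (assumption: the
# model was trained with the standard Trainer API; flags not listed in the card are guesses).
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-small-finetuned-en-to-fr",  # hypothetical output directory
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=100,
    evaluation_strategy="epoch",   # assumed: the results table reports one eval per epoch
    predict_with_generate=True,    # assumed: needed to compute Bleu / Gen Len during eval
)
```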

Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu    | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log        | 1.0   | 2    | 0.0103          | 94.2545 | 14.381  |
| No log        | 2.0   | 4    | 0.0097          | 94.2545 | 14.381  |
| No log        | 3.0   | 6    | 0.0093          | 94.2545 | 14.381  |
| No log        | 4.0   | 8    | 0.0089          | 94.2545 | 14.381  |
| No log        | 5.0   | 10   | 0.0085          | 94.2545 | 14.381  |
| No log        | 6.0   | 12   | 0.0081          | 94.2545 | 14.381  |
| No log        | 7.0   | 14   | 0.0078          | 94.2545 | 14.381  |
| No log        | 8.0   | 16   | 0.0075          | 94.2545 | 14.381  |
| No log        | 9.0   | 18   | 0.0072          | 94.2545 | 14.381  |
| No log        | 10.0  | 20   | 0.0069          | 94.2545 | 14.381  |
| No log        | 11.0  | 22   | 0.0067          | 94.2545 | 14.381  |
| No log        | 12.0  | 24   | 0.0064          | 94.2545 | 14.381  |
| No log        | 13.0  | 26   | 0.0063          | 94.2545 | 14.381  |
| No log        | 14.0  | 28   | 0.0061          | 94.2545 | 14.381  |
| No log        | 15.0  | 30   | 0.0059          | 94.2545 | 14.381  |
| No log        | 16.0  | 32   | 0.0058          | 94.2545 | 14.381  |
| No log        | 17.0  | 34   | 0.0057          | 94.2545 | 14.381  |
| No log        | 18.0  | 36   | 0.0055          | 94.2545 | 14.381  |
| No log        | 19.0  | 38   | 0.0054          | 94.2545 | 14.381  |
| No log        | 20.0  | 40   | 0.0053          | 94.2545 | 14.381  |
| No log        | 21.0  | 42   | 0.0052          | 94.2545 | 14.381  |
| No log        | 22.0  | 44   | 0.0051          | 94.2545 | 14.381  |
| No log        | 23.0  | 46   | 0.0051          | 94.2545 | 14.381  |
| No log        | 24.0  | 48   | 0.0050          | 94.2545 | 14.381  |
| No log        | 25.0  | 50   | 0.0049          | 94.2545 | 14.381  |
| No log        | 26.0  | 52   | 0.0048          | 94.2545 | 14.381  |
| No log        | 27.0  | 54   | 0.0047          | 94.2545 | 14.381  |
| No log        | 28.0  | 56   | 0.0046          | 94.2545 | 14.381  |
| No log        | 29.0  | 58   | 0.0045          | 94.2545 | 14.381  |
| No log        | 30.0  | 60   | 0.0045          | 94.2545 | 14.381  |
| No log        | 31.0  | 62   | 0.0044          | 94.2545 | 14.381  |
| No log        | 32.0  | 64   | 0.0043          | 94.2545 | 14.381  |
| No log        | 33.0  | 66   | 0.0042          | 94.2545 | 14.381  |
| No log        | 34.0  | 68   | 0.0041          | 94.2545 | 14.381  |
| No log        | 35.0  | 70   | 0.0041          | 94.2545 | 14.381  |
| No log        | 36.0  | 72   | 0.0040          | 94.2545 | 14.381  |
| No log        | 37.0  | 74   | 0.0039          | 94.2545 | 14.381  |
| No log        | 38.0  | 76   | 0.0039          | 94.2545 | 14.381  |
| No log        | 39.0  | 78   | 0.0038          | 94.2545 | 14.381  |
| No log        | 40.0  | 80   | 0.0037          | 94.2545 | 14.381  |
| No log        | 41.0  | 82   | 0.0037          | 94.2545 | 14.381  |
| No log        | 42.0  | 84   | 0.0036          | 94.2545 | 14.381  |
| No log        | 43.0  | 86   | 0.0035          | 94.2545 | 14.381  |
| No log        | 44.0  | 88   | 0.0035          | 94.2545 | 14.381  |
| No log        | 45.0  | 90   | 0.0034          | 94.2545 | 14.381  |
| No log        | 46.0  | 92   | 0.0034          | 94.2545 | 14.381  |
| No log        | 47.0  | 94   | 0.0033          | 94.2545 | 14.381  |
| No log        | 48.0  | 96   | 0.0033          | 94.2545 | 14.381  |
| No log        | 49.0  | 98   | 0.0033          | 94.2545 | 14.381  |
| No log        | 50.0  | 100  | 0.0033          | 94.2545 | 14.381  |
| No log        | 51.0  | 102  | 0.0032          | 94.2545 | 14.381  |
| No log        | 52.0  | 104  | 0.0032          | 94.2545 | 14.381  |
| No log        | 53.0  | 106  | 0.0032          | 94.2545 | 14.381  |
| No log        | 54.0  | 108  | 0.0032          | 94.2545 | 14.381  |
| No log        | 55.0  | 110  | 0.0031          | 94.2545 | 14.381  |
| No log        | 56.0  | 112  | 0.0031          | 94.2545 | 14.381  |
| No log        | 57.0  | 114  | 0.0031          | 94.2545 | 14.381  |
| No log        | 58.0  | 116  | 0.0031          | 94.2545 | 14.381  |
| No log        | 59.0  | 118  | 0.0030          | 94.2545 | 14.381  |
| No log        | 60.0  | 120  | 0.0030          | 94.2545 | 14.381  |
| No log        | 61.0  | 122  | 0.0030          | 94.2545 | 14.381  |
| No log        | 62.0  | 124  | 0.0030          | 94.2545 | 14.381  |
| No log        | 63.0  | 126  | 0.0029          | 94.2545 | 14.381  |
| No log        | 64.0  | 128  | 0.0029          | 94.2545 | 14.381  |
| No log        | 65.0  | 130  | 0.0029          | 94.2545 | 14.381  |
| No log        | 66.0  | 132  | 0.0029          | 94.2545 | 14.381  |
| No log        | 67.0  | 134  | 0.0029          | 94.2545 | 14.381  |
| No log        | 68.0  | 136  | 0.0029          | 94.2545 | 14.381  |
| No log        | 69.0  | 138  | 0.0028          | 94.2545 | 14.381  |
| No log        | 70.0  | 140  | 0.0028          | 94.2545 | 14.381  |
| No log        | 71.0  | 142  | 0.0028          | 94.2545 | 14.381  |
| No log        | 72.0  | 144  | 0.0028          | 94.2545 | 14.381  |
| No log        | 73.0  | 146  | 0.0028          | 94.2545 | 14.381  |
| No log        | 74.0  | 148  | 0.0027          | 94.2545 | 14.381  |
| No log        | 75.0  | 150  | 0.0027          | 94.2545 | 14.381  |
| No log        | 76.0  | 152  | 0.0027          | 94.2545 | 14.381  |
| No log        | 77.0  | 154  | 0.0027          | 94.2545 | 14.381  |
| No log        | 78.0  | 156  | 0.0027          | 94.2545 | 14.381  |
| No log        | 79.0  | 158  | 0.0027          | 94.2545 | 14.381  |
| No log        | 80.0  | 160  | 0.0026          | 94.2545 | 14.381  |
| No log        | 81.0  | 162  | 0.0026          | 94.2545 | 14.381  |
| No log        | 82.0  | 164  | 0.0026          | 94.2545 | 14.381  |
| No log        | 83.0  | 166  | 0.0026          | 94.2545 | 14.381  |
| No log        | 84.0  | 168  | 0.0026          | 94.2545 | 14.381  |
| No log        | 85.0  | 170  | 0.0026          | 94.2545 | 14.381  |
| No log        | 86.0  | 172  | 0.0026          | 94.2545 | 14.381  |
| No log        | 87.0  | 174  | 0.0026          | 94.2545 | 14.381  |
| No log        | 88.0  | 176  | 0.0026          | 94.2545 | 14.381  |
| No log        | 89.0  | 178  | 0.0026          | 94.2545 | 14.381  |
| No log        | 90.0  | 180  | 0.0026          | 94.2545 | 14.381  |
| No log        | 91.0  | 182  | 0.0025          | 94.2545 | 14.381  |
| No log        | 92.0  | 184  | 0.0025          | 94.2545 | 14.381  |
| No log        | 93.0  | 186  | 0.0025          | 94.2545 | 14.381  |
| No log        | 94.0  | 188  | 0.0025          | 94.2545 | 14.381  |
| No log        | 95.0  | 190  | 0.0025          | 94.2545 | 14.381  |
| No log        | 96.0  | 192  | 0.0025          | 94.2545 | 14.381  |
| No log        | 97.0  | 194  | 0.0025          | 94.2545 | 14.381  |
| No log        | 98.0  | 196  | 0.0025          | 94.2545 | 14.381  |
| No log        | 99.0  | 198  | 0.0025          | 94.2545 | 14.381  |
| No log        | 100.0 | 200  | 0.0025          | 94.2545 | 14.381  |

Framework versions

  • Transformers 4.33.1
  • Pytorch 2.0.1
  • Datasets 2.13.0
  • Tokenizers 0.13.2