
byt5-small-wikipron-eng-latn-nz-broad

This model is a fine-tuned version of google/byt5-small on WikiPron English (Latin script, New Zealand English, broad transcription) pronunciation data, as the model name indicates. It achieves the following results on the evaluation set:

  • Loss: 0.1880
  • PER (phoneme error rate): 0.3259
  • Gen Len: 16.3063
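
A minimal inference sketch; the hub repo id below is assumed from this card's title, and the expected output is a broad IPA string:

```python
# Minimal inference sketch; the repo id is an assumption based on the card title.
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "byt5-small-wikipron-eng-latn-nz-broad"  # assumed hub path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# ByT5 operates directly on UTF-8 bytes, so the word goes in as plain text.
inputs = tokenizer("wellington", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```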

Model description

ByT5 is a tokenizer-free, byte-level variant of T5 that reads and writes raw UTF-8 bytes. As the name suggests, this checkpoint fine-tunes google/byt5-small as a sequence-to-sequence grapheme-to-phoneme (G2P) model: given an English word in Latin script, it generates a broad IPA transcription of its New Zealand English pronunciation.

Intended uses & limitations

The model is intended for grapheme-to-phoneme conversion of individual New Zealand English words. Limitations are not documented; in particular, behaviour on multi-word input, proper nouns, or other English varieties is untested here.

Training and evaluation data

Presumably the WikiPron eng_latn_nz_broad word/pronunciation pairs scraped from Wiktionary; the train/evaluation split is not documented.
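
WikiPron distributes its scraped data as two-column TSV files (word, IPA transcription). A loading sketch, where the file name follows WikiPron's naming convention and is an assumption:

```python
# Hedged sketch: load a WikiPron TSV (word <TAB> IPA transcription).
# The file name is assumed from WikiPron's conventions, not documented here.
import csv

with open("eng_latn_nz_broad.tsv", encoding="utf-8") as f:
    pairs = [{"word": w, "ipa": p} for w, p in csv.reader(f, delimiter="\t")]

print(pairs[0])  # e.g. {"word": "...", "ipa": "..."}
```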

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0002
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 128
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 10.0
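
As a hedged sketch, these settings correspond to roughly the following Seq2SeqTrainingArguments in the Transformers 4.29 API; the output path and evaluation strategy are assumptions, not the author's actual script:

```python
# Sketch of training arguments mirroring the list above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="byt5-small-wikipron-eng-latn-nz-broad",  # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,  # total train batch size 32 * 4 = 128
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
    predict_with_generate=True,   # needed to report PER and Gen Len
    evaluation_strategy="epoch",  # assumption: the table below reports per-epoch eval
)
```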

Training results

| Training Loss | Epoch | Step | Validation Loss | PER    | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 2.2173        | 1.0   | 235  | 0.3860          | 0.5095 | 15.7391 |
| 0.3895        | 2.0   | 470  | 0.2617          | 0.4065 | 16.1533 |
| 0.2907        | 3.0   | 705  | 0.2238          | 0.3594 | 16.2184 |
| 0.2479        | 4.0   | 941  | 0.2088          | 0.3451 | 16.2767 |
| 0.2274        | 5.0   | 1176 | 0.2000          | 0.3362 | 16.2672 |
| 0.2114        | 6.0   | 1411 | 0.1949          | 0.3366 | 16.3016 |
| 0.1997        | 7.0   | 1646 | 0.1935          | 0.3333 | 16.3086 |
| 0.1909        | 8.0   | 1882 | 0.1899          | 0.3270 | 16.3070 |
| 0.1854        | 9.0   | 2117 | 0.1882          | 0.3261 | 16.3109 |
| 0.1811        | 9.99  | 2350 | 0.1880          | 0.3259 | 16.3063 |
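
PER here is presumably edit distance between the predicted and reference transcriptions, normalized by reference length; the exact aggregation used is not documented. A self-contained sketch of that common definition:

```python
# Hedged sketch of phoneme error rate (PER): Levenshtein distance between
# predicted and reference phoneme sequences, divided by reference length.
def edit_distance(a, b):
    # Single-row dynamic-programming Levenshtein distance.
    dp = list(range(len(b) + 1))
    for i in range(1, len(a) + 1):
        prev_diag, dp[0] = dp[0], i
        for j in range(1, len(b) + 1):
            cur = dp[j]
            dp[j] = min(
                dp[j] + 1,                           # deletion
                dp[j - 1] + 1,                       # insertion
                prev_diag + (a[i - 1] != b[j - 1]),  # substitution or match
            )
            prev_diag = cur
    return dp[len(b)]

def per(pred_phones, ref_phones):
    return edit_distance(pred_phones, ref_phones) / max(len(ref_phones), 1)

# Example: one substitution in a 3-phoneme reference -> PER of about 0.33.
print(per(list("kɛt"), list("kæt")))
```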

Framework versions

  • Transformers 4.29.2
  • Pytorch 2.0.1+cu117
  • Datasets 2.12.0
  • Tokenizers 0.13.3