---
license: cc-by-sa-4.0
tags:
  - generated_from_trainer
datasets:
  - te_dx_jp
model-index:
  - name: t5-base-TEDxJP-5front-1body-5rear
    results: []
---

# t5-base-TEDxJP-5front-1body-5rear

This model is a fine-tuned version of [sonoisa/t5-base-japanese](https://huggingface.co/sonoisa/t5-base-japanese) on the te_dx_jp dataset. It achieves the following results on the evaluation set:

- Loss: 0.4373
- Wer: 0.1699
- Mer: 0.1642
- Wil: 0.2499
- Wip: 0.7501
- Hits: 55848
- Substitutions: 6297
- Deletions: 2442
- Insertions: 2236
- Cer: 0.1360
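The checkpoint can be loaded with the standard Transformers seq2seq classes. The snippet below is a minimal, untested sketch: the repository id `Padomin/t5-base-TEDxJP-5front-1body-5rear` and the sample input sentence are assumptions, not part of the original card.

```python
# Minimal inference sketch. Assumption: the checkpoint is published as
# "Padomin/t5-base-TEDxJP-5front-1body-5rear"; adjust the id if it differs.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Padomin/t5-base-TEDxJP-5front-1body-5rear"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical input: a Japanese transcript segment for the model to rewrite.
text = "今日 は 音声 認識 の 話 を します"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```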

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
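For reference, the list above corresponds roughly to the following `Seq2SeqTrainingArguments` configuration. This is a reconstruction from the reported hyperparameters, not the original training script; the output directory is a placeholder.

```python
# Sketch of the hyperparameters above expressed as Seq2SeqTrainingArguments.
# Reconstructed from the list; not the original training script.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-base-TEDxJP-5front-1body-5rear",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10,
)
```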

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer    | Mer    | Wil    | Wip    | Hits  | Substitutions | Deletions | Insertions | Cer    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:------:|:-----:|:-------------:|:---------:|:----------:|:------:|
| 0.5812        | 1.0   | 1457  | 0.4658          | 0.2393 | 0.2197 | 0.3076 | 0.6924 | 54882 | 6717          | 2988      | 5750       | 0.2187 |
| 0.5253        | 2.0   | 2914  | 0.4264          | 0.1832 | 0.1756 | 0.2632 | 0.7368 | 55549 | 6498          | 2540      | 2793       | 0.1520 |
| 0.4412        | 3.0   | 4371  | 0.4161          | 0.1728 | 0.1670 | 0.2535 | 0.7465 | 55665 | 6363          | 2559      | 2240       | 0.1360 |
| 0.3465        | 4.0   | 5828  | 0.4155          | 0.1706 | 0.1650 | 0.2504 | 0.7496 | 55756 | 6266          | 2565      | 2186       | 0.1356 |
| 0.3575        | 5.0   | 7285  | 0.4196          | 0.1696 | 0.1642 | 0.2498 | 0.7502 | 55781 | 6283          | 2523      | 2151       | 0.1358 |
| 0.3556        | 6.0   | 8742  | 0.4164          | 0.1687 | 0.1632 | 0.2487 | 0.7513 | 55857 | 6274          | 2456      | 2167       | 0.1341 |
| 0.3145        | 7.0   | 10199 | 0.4245          | 0.1705 | 0.1648 | 0.2504 | 0.7496 | 55819 | 6297          | 2471      | 2244       | 0.1355 |
| 0.3074        | 8.0   | 11656 | 0.4266          | 0.1693 | 0.1639 | 0.2494 | 0.7506 | 55799 | 6274          | 2514      | 2148       | 0.1358 |
| 0.269         | 9.0   | 13113 | 0.4352          | 0.1693 | 0.1637 | 0.2492 | 0.7508 | 55878 | 6288          | 2421      | 2225       | 0.1346 |
| 0.3162        | 10.0  | 14570 | 0.4373          | 0.1699 | 0.1642 | 0.2499 | 0.7501 | 55848 | 6297          | 2442      | 2236       | 0.1360 |

### Framework versions

- Transformers 4.21.2
- Pytorch 1.12.1+cu116
- Datasets 2.4.0
- Tokenizers 0.12.1