---
language:
  - zh
license: apache-2.0
base_model: openai/whisper-tiny
tags:
  - generated_from_trainer
datasets:
  - formospeech/hat_asr_aligned
model-index:
  - name: Whisper Tiny Hakka Simulated Webcam
    results: []
---

Whisper Tiny Hakka Simulated Webcam

This model is a fine-tuned version of openai/whisper-tiny on the HAT ASR Aligned dataset. It achieves the following results on the evaluation set (a short usage sketch follows these numbers):

  • Loss: 0.1428
  • CER: 9.1488
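
Because this is a plain fine-tune of openai/whisper-tiny, the standard transformers ASR pipeline should load it directly. The sketch below is minimal; MODEL_ID and "sample.wav" are placeholders, not values taken from this card.

```python
# Minimal inference sketch with the transformers ASR pipeline.
# MODEL_ID and "sample.wav" are placeholders; substitute this repository's
# actual id and your own audio file.
from transformers import pipeline

MODEL_ID = "user/whisper-tiny-hakka-simulated-webcam"  # hypothetical repo id

asr = pipeline("automatic-speech-recognition", model=MODEL_ID)

# The card's metadata lists `zh`, so force that decoding language here.
# Passing a file path requires ffmpeg for audio decoding.
result = asr("sample.wav", generate_kwargs={"language": "zh"})
print(result["text"])
```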

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training; a matching configuration sketch follows the list:

  • learning_rate: 0.0001
  • train_batch_size: 64
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 976
  • training_steps: 9760
  • mixed_precision_training: Native AMP
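
These values map one-to-one onto transformers' Seq2SeqTrainingArguments. The sketch below reconstructs that configuration; output_dir is illustrative, and the Adam betas/epsilon listed above are the Trainer defaults, so they need no explicit arguments.

```python
# Reconstruction sketch of the training configuration above using
# transformers' Seq2SeqTrainingArguments. output_dir is hypothetical;
# the Adam betas/epsilon listed in the card are the Trainer defaults.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-hakka-simulated-webcam",  # hypothetical path
    learning_rate=1e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=976,
    max_steps=9760,
    fp16=True,  # corresponds to "Native AMP" mixed-precision training
)
```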

Training results

Training Loss   Epoch     Step   Validation Loss   CER
0.2406          0.9980    488    0.3252            23.5719
0.1309          1.9959    976    0.2390            24.1302
0.0836          2.9939    1464   0.2087            16.4008
0.0431          3.9918    1952   0.1886            18.9368
0.027           4.9898    2440   0.1932            12.6823
0.0173          5.9877    2928   0.1937            11.5218
0.0098          6.9857    3416   0.1835             9.9163
0.0094          7.9836    3904   0.1782            13.6325
0.0057          8.9816    4392   0.1806            14.1642
0.0051          9.9796    4880   0.1627            10.9358
0.0023          10.9775   5368   0.1680             9.8724
0.0025          11.9755   5856   0.1682            14.1179
0.0007          12.9734   6344   0.1537             9.5638
0.0007          13.9714   6832   0.1586             9.7637
0.0003          14.9693   7320   0.1432             8.6807
0.0001          15.9673   7808   0.1463             8.6865
0.0001          16.9652   8296   0.1445             8.1883
0.0001          17.9632   8784   0.1443             8.6044
0.0001          18.9611   9272   0.1427             8.9731
0.0001          19.9591   9760   0.1428             9.1488
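
The CER column is the character error rate, apparently reported on a 0–100 scale. The card does not say which implementation was used; a common choice is the evaluate library's cer metric, sketched below with hypothetical example strings.

```python
# Sketch of the CER computation (assumed implementation; the card does not
# name the tool). Requires `pip install evaluate jiwer`.
import evaluate

cer_metric = evaluate.load("cer")

predictions = ["今天天氣很好"]  # hypothetical model transcript
references = ["今天天氣真好"]   # hypothetical reference transcript

# evaluate's cer returns a fraction; scale by 100 to match the table above.
cer = 100 * cer_metric.compute(predictions=predictions, references=references)
print(f"CER: {cer:.4f}")  # one substitution over six characters -> 16.6667
```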

Framework versions

  • Transformers 4.42.3
  • PyTorch 2.3.0+cu121
  • Datasets 2.20.0
  • Tokenizers 0.19.1