---
language:
  - ko
license: apache-2.0
tags:
  - hf-asr-leaderboard
  - generated_from_trainer
base_model: openai/whisper-large
datasets:
  - Marcusxx/gwanju
model-index:
  - name: gwanju_large_model
    results: []
---

# gwanju_large_model

This model is a fine-tuned version of openai/whisper-large on the Marcusxx/gwanju dataset. It achieves the following results on the evaluation set:

- Loss: 0.3547
- Cer: 395.4097
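
A minimal inference sketch is shown below, assuming standard Transformers ASR pipeline usage; it is not taken from the training script, and the repository id `Marcusxx/gwanju_large_model` and the audio path `sample.wav` are assumptions/placeholders.

```python
# Minimal inference sketch (assumed usage, not from the model card itself).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Marcusxx/gwanju_large_model",  # assumed repository id
    generate_kwargs={"language": "ko", "task": "transcribe"},  # Korean transcription
)

# "sample.wav" is a placeholder path to a Korean speech recording.
print(asr("sample.wav")["text"])
```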

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- training_steps: 10000
- mixed_precision_training: Native AMP
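
The list above corresponds roughly to the following Seq2SeqTrainingArguments. This is a reconstruction under the assumption that a standard Seq2SeqTrainer recipe for Whisper fine-tuning was used (argument names follow Transformers 4.41); it is not the exact training script.

```python
# Rough reconstruction of the hyperparameters above as Seq2SeqTrainingArguments.
# Assumption: a standard Seq2SeqTrainer / Whisper fine-tuning setup was used.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="gwanju_large_model",
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=250,
    max_steps=10000,
    fp16=True,                    # Native AMP mixed-precision training
    eval_strategy="steps",        # evaluation every 1000 steps, per the results table
    eval_steps=1000,
    predict_with_generate=True,   # generate text so CER can be computed
)
```

The Adam betas and epsilon listed above are the library defaults, so they do not need to be set explicitly here.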

### Training results

| Training Loss | Epoch  | Step  | Validation Loss | Cer      |
|:-------------:|:------:|:-----:|:---------------:|:--------:|
| 0.4409        | 0.2964 | 1000  | 0.4335          | 106.5804 |
| 0.3824        | 0.5928 | 2000  | 0.3930          | 442.5527 |
| 0.3757        | 0.8892 | 3000  | 0.3727          | 455.4447 |
| 0.2271        | 1.1855 | 4000  | 0.3712          | 463.1818 |
| 0.2528        | 1.4819 | 5000  | 0.3600          | 468.4532 |
| 0.2068        | 1.7783 | 6000  | 0.3523          | 468.7220 |
| 0.1221        | 2.0747 | 7000  | 0.3592          | 480.3038 |
| 0.1157        | 2.3711 | 8000  | 0.3614          | 377.9116 |
| 0.121         | 2.6675 | 9000  | 0.3579          | 401.4220 |
| 0.1046        | 2.9638 | 10000 | 0.3547          | 395.4097 |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.2.2+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1