---
license: apache-2.0
library_name: peft
tags:
  - generated_from_trainer
base_model: SungBeom/whisper-small-ko
datasets:
  - audiofolder
model-index:
  - name: SungBeom-whisper-small-ko-no-bg-v1
    results: []
---

# SungBeom-whisper-small-ko-no-bg-v1

This model is a fine-tuned version of [SungBeom/whisper-small-ko](https://huggingface.co/SungBeom/whisper-small-ko) on the audiofolder dataset. It achieves the following results on the evaluation set:

- Loss: 0.2086
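
Since the metadata lists `library_name: peft`, this repository presumably contains a PEFT (LoRA-style) adapter rather than full model weights. Below is a minimal inference sketch, not an official usage snippet from the author: it assumes the adapter loads directly onto the base checkpoint, that the base repo ships processor files, and that `sample.wav` is a placeholder for your own audio file.

```python
# Hedged inference sketch: assumes this repo is a PEFT adapter for
# SungBeom/whisper-small-ko, per `library_name: peft` in the metadata.
import librosa
import torch
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

BASE = "SungBeom/whisper-small-ko"
ADAPTER = "devkya/SungBeom-whisper-small-ko-no-bg-v1"

# If the base repo lacks processor files, openai/whisper-small should work instead.
processor = WhisperProcessor.from_pretrained(BASE)
model = WhisperForConditionalGeneration.from_pretrained(BASE)
model = PeftModel.from_pretrained(model, ADAPTER)
model.eval()

# Whisper expects 16 kHz mono audio; "sample.wav" is a placeholder path.
audio, _ = librosa.load("sample.wav", sr=16000)
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    ids = model.generate(input_features=inputs.input_features)
print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```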

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 5000
- mixed_precision_training: Native AMP
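
As a hedged reconstruction (the original training script is not published), these settings map roughly onto `Seq2SeqTrainingArguments` as shown below; the output directory name and the evaluation/save cadence are assumptions, the latter inferred from the 500-step intervals in the results table:

```python
# Reconstruction of the hyperparameter list above, not the author's script.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-ko-no-bg-v1",  # placeholder name
    learning_rate=1e-6,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=2,   # effective batch size: 16 * 2 = 32
    lr_scheduler_type="linear",
    warmup_ratio=0.01,
    max_steps=5000,
    seed=42,
    fp16=True,                       # "Native AMP" mixed precision
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the transformers defaults,
    # so they need no explicit arguments.
    evaluation_strategy="steps",     # assumption, based on the results table
    eval_steps=500,                  # assumption
    save_steps=500,                  # assumption
)
```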

### Training results

| Training Loss | Epoch     | Step | Validation Loss |
|:-------------:|:---------:|:----:|:---------------:|
| 6.4362        | 142.8571  | 500  | 0.1993          |
| 6.0024        | 285.7143  | 1000 | 0.2035          |
| 5.6884        | 428.5714  | 1500 | 0.2067          |
| 5.5198        | 571.4286  | 2000 | 0.2086          |
| 5.3977        | 714.2857  | 2500 | 0.2095          |
| 5.3111        | 857.1429  | 3000 | 0.2094          |
| 5.2526        | 1000.0    | 3500 | 0.2091          |
| 5.2176        | 1142.8571 | 4000 | 0.2087          |
| 5.1912        | 1285.7143 | 4500 | 0.2086          |
| 5.1898        | 1428.5714 | 5000 | 0.2086          |
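
The unusual epoch counts imply a very small training set: 500 optimizer steps per 142.8571 epochs works out to 3.5 steps per epoch, which at the effective batch size of 32 corresponds to roughly 3.5 × 32 = 112 training examples (the exact count depends on how partial final batches were handled).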

### Framework versions

- PEFT 0.10.0
- Transformers 4.41.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1