
longpause-300m-1109-1

This model is a fine-tuned version of SiRoZaRuPa/300m-0828-3 on the audiofolder dataset. It achieves the following results on the evaluation set:

  • Loss: 4.6617
  • CER (character error rate): 0.9984

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-06
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 64
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 1500
  • num_epochs: 500
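The batch-size bookkeeping above follows directly from the listed values: the effective (total) train batch size is the per-device batch size multiplied by the gradient-accumulation steps. A minimal sketch (pure arithmetic, no training code; values copied from the list above):

```python
# Effective train batch size = per-device batch size x gradient accumulation steps,
# matching the "total_train_batch_size: 64" entry above.
train_batch_size = 32
gradient_accumulation_steps = 2
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 64
```

With this effective batch size, each optimizer step consumes 64 examples even though only 32 fit on the device per forward pass.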

Training results

| Training Loss | Epoch  | Step | Validation Loss | CER    |
|--------------:|-------:|-----:|----------------:|-------:|
| 30.1798       | 47.62  | 500  | 27.0098         | 0.9984 |
| 8.478         | 95.24  | 1000 | 8.2299          | 0.9984 |
| 6.247         | 142.86 | 1500 | 6.3422          | 0.9984 |
| 5.5716        | 190.48 | 2000 | 5.6596          | 0.9984 |
| 5.1403        | 238.1  | 2500 | 5.2172          | 0.9984 |
| 4.8672        | 285.71 | 3000 | 4.9380          | 0.9984 |
| 4.7082        | 333.33 | 3500 | 4.7765          | 0.9984 |
| 4.6271        | 380.95 | 4000 | 4.6959          | 0.9984 |
| 4.5957        | 428.57 | 4500 | 4.6659          | 0.9984 |
| 4.5896        | 476.19 | 5000 | 4.6617          | 0.9984 |

Framework versions

  • Transformers 4.32.1
  • Pytorch 2.0.1
  • Datasets 2.12.0
  • Tokenizers 0.13.2