small_finetune_M01

This model is a fine-tuned version of facebook/wav2vec2-base on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 3.2363
  • Wer: 1.0 (word error rate; see the metric sketch after this list)
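
The sketch below shows how such a word error rate score can be computed with the Hugging Face evaluate library. It is illustrative only: the exact evaluation script used for this model is not documented, and the transcripts are made-up examples.

```python
# Illustrative WER computation; the strings below are made-up examples,
# not data from this model's evaluation set.
import evaluate

wer_metric = evaluate.load("wer")  # wraps jiwer under the hood

predictions = ["hello world"]   # hypothetical model transcriptions
references = ["goodbye world"]  # hypothetical ground-truth transcripts

wer = wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2f}")  # 0.50 here; a WER of 1.0 means the error count equals the number of reference words
```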

Model description

More information needed

Intended uses & limitations

More information needed
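
In the absence of documented usage, a minimal inference sketch for a fine-tuned wav2vec2 CTC checkpoint is given below. The repository id "username/small_finetune_M01" and the audio file name are placeholders, and 16 kHz mono input is assumed (the usual expectation for facebook/wav2vec2-base).

```python
# Minimal inference sketch (assumptions: placeholder repo id, 16 kHz mono audio).
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "username/small_finetune_M01"  # placeholder; substitute the actual repo id
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

waveform, sample_rate = torchaudio.load("sample.wav")  # placeholder file
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = processor(waveform.squeeze(0).numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)[0]
print(transcription)
```

Note that the reported WER of 1.0 suggests the checkpoint does not yet produce usable transcriptions, so this sketch documents the interface rather than the expected output quality.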

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the TrainingArguments sketch after the list):

  • learning_rate: 0.0002
  • train_batch_size: 20
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 800
  • num_epochs: 4000
  • mixed_precision_training: Native AMP
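
A sketch of how these values map onto TrainingArguments is shown below, assuming the standard Trainer API was used. The output directory and the 800-step evaluation, logging, and save interval are assumptions inferred from the results table, not documented settings.

```python
# Sketch of TrainingArguments matching the listed hyperparameters (Transformers 4.18-era API).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="small_finetune_M01",  # assumed output directory
    learning_rate=2e-4,
    per_device_train_batch_size=20,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,                   # Adam betas and epsilon as listed (Transformers defaults)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=800,
    num_train_epochs=4000,
    fp16=True,                        # "Native AMP" mixed precision
    evaluation_strategy="steps",      # assumption: eval every 800 steps, matching the results table
    eval_steps=800,
    logging_steps=800,
    save_steps=800,                   # assumption
)
```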

Training results

| Training Loss | Epoch  | Step  | Validation Loss | Wer |
|:--------------|:-------|:------|:----------------|:----|
| 121.7217      | 200.0  | 800   | 3.1742          | 1.0 |
| 2.066         | 400.0  | 1600  | 2.8390          | 1.0 |
| 1.7019        | 600.0  | 2400  | 2.8359          | 1.0 |
| 1.5282        | 800.0  | 3200  | 2.8655          | 1.0 |
| 1.4089        | 1000.0 | 4000  | 2.8933          | 1.0 |
| 1.3123        | 1200.0 | 4800  | 2.9047          | 1.0 |
| 1.2361        | 1400.0 | 5600  | 2.9677          | 1.0 |
| 1.1758        | 1600.0 | 6400  | 3.0008          | 1.0 |
| 1.1241        | 1800.0 | 7200  | 3.0795          | 1.0 |
| 1.0816        | 2000.0 | 8000  | 3.1214          | 1.0 |
| 1.0497        | 2200.0 | 8800  | 3.1518          | 1.0 |
| 1.0349        | 2400.0 | 9600  | 3.1584          | 1.0 |
| 1.0058        | 2600.0 | 10400 | 3.1876          | 1.0 |
| 0.9983        | 2800.0 | 11200 | 3.1843          | 1.0 |
| 0.9863        | 3000.0 | 12000 | 3.1914          | 1.0 |
| 0.9776        | 3200.0 | 12800 | 3.2005          | 1.0 |
| 0.9647        | 3400.0 | 13600 | 3.2245          | 1.0 |
| 0.9586        | 3600.0 | 14400 | 3.2352          | 1.0 |
| 0.9521        | 3800.0 | 15200 | 3.2398          | 1.0 |
| 0.9537        | 4000.0 | 16000 | 3.2363          | 1.0 |

Framework versions

  • Transformers 4.18.0
  • Pytorch 1.10.2+cu102
  • Datasets 2.3.2
  • Tokenizers 0.12.1
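
To approximate this environment, the versions above can be pinned as shown below. Note that the listed PyTorch build (1.10.2+cu102) is a CUDA 10.2 wheel, so a plain PyPI install of torch==1.10.2 may pull a different CUDA build.

```
pip install transformers==4.18.0 torch==1.10.2 datasets==2.3.2 tokenizers==0.12.1
```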