
wav2vec2-large-TIMIT-IPA

This model is a fine-tuned version of facebook/wav2vec2-large on an IPA-transcribed version of the TIMIT dataset. It achieves the following results on the evaluation set (a usage sketch follows the results):

  • Loss: 0.3130
  • PER (phoneme error rate): 0.0550
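
A minimal inference sketch, assuming the checkpoint is a standard Wav2Vec2ForCTC export with a phoneme-level CTC vocabulary. The repository id and audio path below are placeholders, not values from this card:

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# "<namespace>/wav2vec2-large-TIMIT-IPA" and "speech.wav" are placeholders.
model_id = "<namespace>/wav2vec2-large-TIMIT-IPA"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

# wav2vec2-large expects 16 kHz mono input; resample if necessary.
waveform, sample_rate = torchaudio.load("speech.wav")
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = processor(waveform.squeeze(0).numpy(),
                   sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding; the output should be an IPA phone string.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```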

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a hedged TrainingArguments sketch follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 64
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 1000
  • num_epochs: 100
  • mixed_precision_training: Native AMP
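
A sketch of the equivalent transformers.TrainingArguments. The output directory is an assumption, and the card does not say whether the batch size of 64 is per device or the effective total:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-TIMIT-IPA",  # assumed, not stated in the card
    learning_rate=1e-4,
    per_device_train_batch_size=64,  # interpretation of train_batch_size assumed
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=100,
    fp16=True,  # "Native AMP" mixed-precision training
)
```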

Training results

| Training Loss | Epoch | Step | Validation Loss | PER    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.3003        | 6.85  | 500  | 3.8093          | 0.9424 |
| 1.7151        | 13.70 | 1000 | 0.2929          | 0.0708 |
| 0.2212        | 20.55 | 1500 | 0.2259          | 0.0575 |
| 0.1241        | 27.40 | 2000 | 0.2716          | 0.0595 |
| 0.0917        | 34.25 | 2500 | 0.2902          | 0.0606 |
| 0.0659        | 41.10 | 3000 | 0.2982          | 0.0570 |
| 0.0532        | 47.95 | 3500 | 0.2770          | 0.0595 |
| 0.0438        | 54.79 | 4000 | 0.2953          | 0.0579 |
| 0.0368        | 61.64 | 4500 | 0.3151          | 0.0572 |
| 0.0303        | 68.49 | 5000 | 0.3425          | 0.0576 |
| 0.0281        | 75.34 | 5500 | 0.3065          | 0.0558 |
| 0.0215        | 82.19 | 6000 | 0.3288          | 0.0558 |
| 0.0185        | 89.04 | 6500 | 0.3288          | 0.0558 |
| 0.0180        | 95.89 | 7000 | 0.3130          | 0.0550 |
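
The PER column above is the phoneme error rate: edit distance over phone sequences rather than words. A minimal sketch of computing it with the evaluate library (the card does not say how PER was computed, so the metric choice and example strings are assumptions):

```python
import evaluate

# PER is computed like WER, but over space-separated phone symbols
# instead of words. These example strings are illustrative only.
wer = evaluate.load("wer")
predictions = ["h ə l oʊ w ɝ l d"]  # hypothetical model output
references = ["h ɛ l oʊ w ɝ l d"]   # hypothetical reference
print(wer.compute(predictions=predictions, references=references))  # 0.125
```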

Framework versions

  • Transformers 4.20.0
  • Pytorch 1.12.1+cu113
  • Datasets 2.6.2.dev0
  • Tokenizers 0.12.1