---
language:
  - pa-IN
license: apache-2.0
tags:
  - automatic-speech-recognition
  - robust-speech-event
datasets:
  - mozilla-foundation/common_voice_7_0
metrics:
  - wer
  - cer
model-index:
  - name: wav2vec2-large-xlsr-53-punjabi
    results:
      - task:
          type: automatic-speech-recognition
          name: Speech Recognition
        dataset:
          type: mozilla-foundation/common_voice_7_0
          name: Common Voice pa-IN
          args: pa-IN
        metrics:
          - type: wer
            value: 39.42
            name: Test WER
            args:
              - learning_rate: 0.0003
              - train_batch_size: 16
              - eval_batch_size: 8
              - seed: 42
              - gradient_accumulation_steps: 2
              - total_train_batch_size: 32
              - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
              - lr_scheduler_type: linear
              - lr_scheduler_warmup_steps: 200
              - num_epochs: 30
              - mixed_precision_training: Native AMP
          - type: cer
            value: 12.99
            name: Test CER
            args:
              - learning_rate: 0.0003
              - train_batch_size: 16
              - eval_batch_size: 8
              - seed: 42
              - gradient_accumulation_steps: 2
              - total_train_batch_size: 32
              - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
              - lr_scheduler_type: linear
              - lr_scheduler_warmup_steps: 200
              - num_epochs: 30
              - mixed_precision_training: Native AMP
---

# wav2vec2-large-xlsr-53-punjabi

This model is a fine-tuned version of [manandey/wav2vec2-large-xlsr-punjabi](https://huggingface.co/manandey/wav2vec2-large-xlsr-punjabi) on the Common Voice 7.0 Punjabi (pa-IN) dataset. It achieves the following results on the evaluation set:

- Loss: 0.6752
- WER: 0.3942
- CER: 0.1299
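
For reference, a minimal inference sketch is shown below. The Hub repo id (`kingabzpro/wav2vec2-large-xlsr-53-punjabi`) and the audio file path are assumptions; wav2vec 2.0 expects 16 kHz mono audio, so the clip is resampled first.

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "kingabzpro/wav2vec2-large-xlsr-53-punjabi"  # assumed repo id
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load a clip and resample to the 16 kHz rate the model was trained on.
speech, sample_rate = torchaudio.load("audio.wav")  # placeholder path
speech = torchaudio.functional.resample(speech.squeeze(0), sample_rate, 16_000)

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

# Greedy CTC decoding of the most likely token at each frame.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```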

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 30
- mixed_precision_training: Native AMP
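
Assuming the standard `Trainer` setup, these settings map roughly onto `TrainingArguments` as sketched below; `output_dir` and the save/eval cadence are placeholders, not values from the original run.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./wav2vec2-large-xlsr-53-punjabi",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 16 * 2 = 32
    lr_scheduler_type="linear",
    warmup_steps=200,
    num_train_epochs=30,
    fp16=True,  # native AMP mixed-precision training
)
```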

### Training results

| Training Loss | Epoch | Step | Validation Loss | WER    | CER    |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 0.8899        | 4.16  | 100  | 0.5338          | 0.4233 | 0.1394 |
| 0.3652        | 8.33  | 200  | 0.5759          | 0.4192 | 0.1349 |
| 0.248         | 12.49 | 300  | 0.6309          | 0.4102 | 0.1327 |
| 0.1898        | 16.65 | 400  | 0.6441          | 0.4007 | 0.1351 |
| 0.1486        | 20.82 | 500  | 0.6790          | 0.4044 | 0.1393 |
| 0.1245        | 24.98 | 600  | 0.6869          | 0.3987 | 0.1309 |
| 0.1085        | 29.16 | 700  | 0.6752          | 0.3942 | 0.1299 |
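
The WER and CER columns follow the usual `datasets` metric definitions; a sketch of computing them with the pinned `datasets` release is shown below. The prediction and reference strings are toy placeholders standing in for decoded Common Voice pa-IN test transcripts.

```python
from datasets import load_metric

wer_metric = load_metric("wer")  # both metrics rely on the jiwer package
cer_metric = load_metric("cer")

# Toy placeholders; in practice these are the model's decoded outputs and
# the reference transcripts from the test split.
predictions = ["this is a tist"]
references = ["this is a test"]

print("WER:", wer_metric.compute(predictions=predictions, references=references))
print("CER:", cer_metric.compute(predictions=predictions, references=references))
```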

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3