
afrospeech-wav2vec-kua

This model is a fine-tuned version of facebook/wav2vec2-base on crowd-speech-africa, a crowd-sourced dataset collected using the afro-speech Space.
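
A minimal inference sketch is shown below. It assumes the checkpoint is published as chrisjay/afrospeech-wav2vec-kua and can be loaded with the standard audio-classification pipeline; the audio path is a placeholder:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint; the model id is taken from this card and the
# pipeline task (audio classification) is an assumption about the checkpoint.
classifier = pipeline(
    "audio-classification",
    model="chrisjay/afrospeech-wav2vec-kua",
)

# "clip.wav" is a placeholder path to a 16 kHz mono recording of a spoken digit.
predictions = classifier("clip.wav")
print(predictions)  # list of {"label": ..., "score": ...} dicts, highest score first
```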

Training and evaluation data

The model was trained on audio data in Oshiwambo (kua).

  • Size of training set: 1376
  • Size of validation set: 345

Below is the distribution of the dataset (training and validation).

[Figure: digits-bar-plot-for-afrospeech — digit distribution across the training and validation sets]

Evaluation performance

It achieves the following results on the validation set:

  • F1: 0.9913
  • Accuracy: 0.9922

The confusion matrix below gives a closer look at the model's performance across the digits. From it, we can read off the model's per-digit precision and recall, as well as other useful insights.

[Figure: confusion matrix on the validation set]
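
For reference, the sketch below shows one way such metrics and a confusion matrix could be computed with scikit-learn; y_true and y_pred are illustrative placeholders for the validation labels and model predictions, and the F1 averaging mode is an assumption:

```python
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score

# Placeholder arrays of reference digit labels and model predictions.
y_true = [0, 1, 2, 3, 7, 7, 9]
y_pred = [0, 1, 2, 3, 7, 9, 9]

print("Accuracy:", accuracy_score(y_true, y_pred))
print("F1:", f1_score(y_true, y_pred, average="weighted"))  # averaging mode assumed
print(confusion_matrix(y_true, y_pred))
```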

Training hyperparameters

The following hyperparameters were used during training (a sketch of the corresponding TrainingArguments follows the list):

  • learning_rate: 3e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • num_epochs: 150
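
As a rough guide, these settings correspond to the TrainingArguments sketch below; the output directory name is illustrative and all other arguments are left at their defaults:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="afrospeech-wav2vec-kua",  # hypothetical output path
    learning_rate=3e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    num_train_epochs=150,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```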

Training results

Training Loss    Epoch    Validation Accuracy
0.0096           1        0.9843
0.2555           50       0.9843
0.00145          100      0.98177
0.00053          150      0.97770

Framework versions

  • Transformers 4.21.3
  • Pytorch 1.12.0
  • Datasets 1.14.0
  • Tokenizers 0.12.1