---
license: apache-2.0
tags:
- afro-digits-speech
datasets:
- crowd-speech-africa
metrics:
- accuracy
model-index:
- name: afrospeech-wav2vec-yor
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: Afro Speech
type: chrisjay/crowd-speech-africa
args: no
metrics:
- name: Validation Accuracy
type: accuracy
value: 0.83
---
# afrospeech-wav2vec-yor
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on [crowd-speech-africa](https://huggingface.co/datasets/chrisjay/crowd-speech-africa), a crowd-sourced dataset collected using the [afro-speech Space](https://huggingface.co/spaces/chrisjay/afro-speech). It achieves the following results on the [validation set](VALID_yoruba_yor_audio_data.csv):
- F1: 0.83
- Accuracy: 0.83
The confusion matrix below gives a closer look at the model's performance across the digits, including its per-class precision and recall and where it confuses one digit for another.
![confusion matrix](afrospeech-wav2vec-yor_confusion_matrix_VALID.png)
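For quick experimentation, the model can be loaded with the standard Transformers audio-classification pipeline. The sketch below is illustrative rather than an official usage example from this repository; the repository id `chrisjay/afrospeech-wav2vec-yor` and the file path `sample.wav` are assumptions.

```python
# Minimal inference sketch (assumed usage, not an official example).
from transformers import pipeline

# Repository id is assumed from the model name and author.
classifier = pipeline(
    "audio-classification",
    model="chrisjay/afrospeech-wav2vec-yor",
)

# "sample.wav" is a placeholder for a recording of a spoken digit.
# Returns a list of {"label": ..., "score": ...} dicts for the digit classes.
predictions = classifier("sample.wav")
print(predictions)
```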
## Training and evaluation data
The model was trained on crowd-sourced audio data in Yoruba (`yor`).
- Size of training set: 22
- Size of validation set: 6
Below is the distribution of the dataset (training and validation):
![digits-bar-plot-for-afrospeech](digits-bar-plot-for-afrospeech-wav2vec-yor.png)
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- num_epochs: 150
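As a rough illustration, these hyperparameters map onto the Transformers `Trainer` API as sketched below. This is not the original training script; dataset loading, feature extraction, and the number of labels (assumed to be 10, one per spoken digit) are placeholders.

```python
# Hedged sketch of the training configuration; preprocessing is elided.
from transformers import (
    AutoModelForAudioClassification,
    Trainer,
    TrainingArguments,
)

model = AutoModelForAudioClassification.from_pretrained(
    "facebook/wav2vec2-base",
    num_labels=10,  # assumption: one class per spoken digit 0-9
)

training_args = TrainingArguments(
    output_dir="afrospeech-wav2vec-yor",
    learning_rate=3e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    num_train_epochs=150,
    # Adam betas and epsilon below are the Trainer defaults,
    # matching the values reported in this card.
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)

trainer = Trainer(
    model=model,
    args=training_args,
    # train_dataset=..., eval_dataset=...  (preprocessed audio features)
)
```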
### Training results
| Training Loss | Epoch | Validation Accuracy |
|:-------------:|:-----:|:-------------------:|
| 0.596         | 1     | 0.5                 |
| 0.0220        | 50    | 0.5                 |
| 0.00305       | 100   | 0.667               |
| 0.0993        | 150   | 0.667               |
### Framework versions
- Transformers 4.21.3
- Pytorch 1.12.0
- Datasets 1.14.0
- Tokenizers 0.12.1