chrisjay committed
Commit 7d80859
1 parent: cd48f69

added updates

Files changed (1)
  1. README.md +14 -12
README.md CHANGED
@@ -25,15 +25,7 @@ model-index:
 
  # afrospeech-wav2vec-ibo
 
- This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the [crowd-speech-africa](https://huggingface.co/datasets/chrisjay/crowd-speech-africa), which was a crowd-sourced dataset collected using the [afro-speech Space](https://huggingface.co/spaces/chrisjay/afro-speech). It achieves the following results on the [validation set](VALID_igbo_ibo_audio_data.csv):
-
- - F1: 1.0
- - Accuracy: 1.0
-
- The confusion matrix below helps to give a better look at the model's performance across the digits. Through it, we can see the precision and recall of the model as well as other important insights.
-
- ![confusion matrix](afrospeech-wav2vec-ibo_confusion_matrix_VALID.png)
-
+ This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the [crowd-speech-africa](https://huggingface.co/datasets/chrisjay/crowd-speech-africa), which was a crowd-sourced dataset collected using the [afro-speech Space](https://huggingface.co/spaces/chrisjay/afro-speech).
 
  ## Training and evaluation data
 
@@ -47,7 +39,17 @@ Below is a distribution of the dataset (training and validation)
  ![digits-bar-plot-for-afrospeech](digits-bar-plot-for-afrospeech-wav2vec-ibo.png)
 
 
- ### Training hyperparameters
+ ## Evaluation performance
+ It achieves the following results on the [validation set](VALID_igbo_ibo_audio_data.csv):
+
+ - F1: 1.0
+ - Accuracy: 1.0
+
+ The confusion matrix below helps to give a better look at the model's performance across the digits. Through it, we can see the precision and recall of the model as well as other important insights.
+
+ ![confusion matrix](afrospeech-wav2vec-ibo_confusion_matrix_VALID.png)
+
+ ## Training hyperparameters
 
  The following hyperparameters were used during training:
  - learning_rate: 3e-05
@@ -56,7 +58,7 @@ The following hyperparameters were used during training:
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - num_epochs: 150
 
- ### Training results
+ ## Training results
 
  | Training Loss | Epoch | Validation Accuracy |
  |:-------------:|:-----:|:--------:|
@@ -67,7 +69,7 @@ The following hyperparameters were used during training:
 
 
 
- ### Framework versions
+ ## Framework versions
 
  - Transformers 4.21.3
  - Pytorch 1.12.0
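
For readers of the updated card, a minimal inference sketch follows. It assumes the checkpoint is published as an audio-classification model under a repo id like `chrisjay/afrospeech-wav2vec-ibo` and expects 16 kHz mono speech, as is typical for wav2vec2-base; neither detail is stated in this diff.

```python
# Hedged inference sketch. Assumptions (not stated in the diff): the checkpoint
# is hosted as "chrisjay/afrospeech-wav2vec-ibo" and carries an
# audio-classification head over 16 kHz mono speech.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="chrisjay/afrospeech-wav2vec-ibo",  # assumed repo id
)

# Any local audio file works; the pipeline decodes and resamples it via ffmpeg.
predictions = classifier("spoken_digit.wav", top_k=3)
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```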
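
The F1 and accuracy of 1.0 and the confusion matrix added under `## Evaluation performance` could be computed along the lines sketched below; scikit-learn is an illustrative choice (the card does not name a library), the `y_true` / `y_pred` arrays are placeholders, and the macro averaging for F1 is an assumption.

```python
# Illustrative metric computation with scikit-learn (library not named on the card).
# y_true / y_pred stand in for the validation labels and the model's predicted
# spoken-digit classes.
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score

y_true = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]  # hypothetical ground-truth digits
y_pred = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]  # hypothetical model predictions

print("Accuracy:", accuracy_score(y_true, y_pred))       # 1.0 on this toy data
print("F1:", f1_score(y_true, y_pred, average="macro"))  # averaging assumed
print(confusion_matrix(y_true, y_pred))                  # digit-vs-digit counts
```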
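
The hyperparameters visible in the diff (learning rate 3e-05, Adam with betas (0.9, 0.999) and epsilon 1e-08, 150 epochs) map onto `transformers.TrainingArguments` roughly as below; values hidden between the hunks (batch sizes, seed, scheduler) are left out rather than guessed, and the output directory is a placeholder.

```python
# Rough, hedged mapping of the listed hyperparameters onto TrainingArguments.
# Only values visible in the diff are set; everything else stays at its default.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="afrospeech-wav2vec-ibo",  # placeholder output directory
    learning_rate=3e-05,
    num_train_epochs=150,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
)
```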