chrisjay committed
Commit 6bc56da
1 Parent(s): 9347956

added updates

Files changed (1):
  1. README.md +13 -12

README.md CHANGED
@@ -25,15 +25,7 @@ model-index:
 
 # afrospeech-wav2vec-gax
 
-This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the [crowd-speech-africa](https://huggingface.co/datasets/chrisjay/crowd-speech-africa), which was a crowd-sourced dataset collected using the [afro-speech Space](https://huggingface.co/spaces/chrisjay/afro-speech). It achieves the following results on the [validation set](VALID_oromo_gax_audio_data.csv):
-
-- F1: 1.0
-- Accuracy: 1.0
-
-The confusion matrix below helps to give a better look at the model's performance across the digits. Through it, we can see the precision and recall of the model as well as other important insights.
-
-![confusion matrix](afrospeech-wav2vec-gax_confusion_matrix_VALID.png)
-
+This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the [crowd-speech-africa](https://huggingface.co/datasets/chrisjay/crowd-speech-africa), which was a crowd-sourced dataset collected using the [afro-speech Space](https://huggingface.co/spaces/chrisjay/afro-speech).
 
 ## Training and evaluation data
 
@@ -46,8 +38,17 @@ Below is a distribution of the dataset (training and valdation)
 
 ![digits-bar-plot-for-afrospeech](digits-bar-plot-for-afrospeech-wav2vec-gax.png)
 
+## Evaluation performance
+It achieves the following results on the [validation set](VALID_oromo_gax_audio_data.csv):
+- F1: 1.0
+- Accuracy: 1.0
+
+The confusion matrix below helps to give a better look at the model's performance across the digits. Through it, we can see the precision and recall of the model as well as other important insights.
+
+![confusion matrix](afrospeech-wav2vec-gax_confusion_matrix_VALID.png)
+
 
-### Training hyperparameters
+## Training hyperparameters
 
 The following hyperparameters were used during training:
 - learning_rate: 3e-05
@@ -56,7 +57,7 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - num_epochs: 150
 
-### Training results
+## Training results
 
 | Training Loss | Epoch | Validation Accuracy |
 |:-------------:|:-----:|:--------:|
@@ -67,7 +68,7 @@ The following hyperparameters were used during training:
 
 
 
-### Framework versions
+## Framework versions
 
 - Transformers 4.21.3
 - Pytorch 1.12.0
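The commit groups the F1, accuracy, and confusion-matrix discussion into an "Evaluation performance" section. As a minimal, self-contained sketch of how such digit-classification metrics could be computed (plain Python for illustration; this is not the repo's actual evaluation code, which presumably uses a standard metrics library):

```python
def evaluate_digits(y_true, y_pred, num_classes=10):
    """Compute accuracy, macro F1, and a confusion matrix for digit labels 0..num_classes-1."""
    # Confusion matrix: rows = true digit, columns = predicted digit.
    cm = [[0] * num_classes for _ in range(num_classes)]
    for t, p in zip(y_true, y_pred):
        cm[t][p] += 1

    accuracy = sum(cm[i][i] for i in range(num_classes)) / len(y_true)

    # Macro F1: average per-class F1 over classes that actually occur.
    f1_scores = []
    for c in range(num_classes):
        tp = cm[c][c]
        fp = sum(cm[r][c] for r in range(num_classes)) - tp  # predicted c, was not c
        fn = sum(cm[c]) - tp                                 # was c, predicted otherwise
        if tp + fp + fn == 0:
            continue  # class absent from both labels and predictions
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        f1_scores.append(f1)
    macro_f1 = sum(f1_scores) / len(f1_scores)
    return accuracy, macro_f1, cm
```

With perfect predictions on the validation set, this returns accuracy 1.0 and F1 1.0, matching the figures reported in the card.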
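The optimizer line in the card (Adam with betas=(0.9,0.999), epsilon=1e-08, learning_rate 3e-05) corresponds to the standard Adam update rule. A pure-Python sketch of a single scalar step, purely to illustrate how those three hyperparameters enter the update (training itself used PyTorch's built-in optimizer):

```python
def adam_step(param, grad, state, lr=3e-05, betas=(0.9, 0.999), eps=1e-08):
    """One Adam update for a scalar parameter, using the card's hyperparameters."""
    m, v, t = state
    t += 1
    m = betas[0] * m + (1 - betas[0]) * grad       # first-moment (mean) EMA
    v = betas[1] * v + (1 - betas[1]) * grad ** 2  # second-moment (variance) EMA
    m_hat = m / (1 - betas[0] ** t)                # bias-corrected estimates
    v_hat = v / (1 - betas[1] ** t)
    param -= lr * m_hat / (v_hat ** 0.5 + eps)     # eps guards against division by zero
    return param, (m, v, t)
```

On the first step the bias-corrected moments cancel to 1, so the parameter moves by almost exactly the learning rate, 3e-05.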