jayashreedevi2020 committed
Commit 1d981dc
1 Parent(s): 92bb46f

End of training

Files changed (1): README.md (+12 −10)
README.md CHANGED

@@ -1,8 +1,8 @@
 ---
 license: apache-2.0
+base_model: facebook/wav2vec2-xls-r-300m
 tags:
 - generated_from_trainer
-base_model: facebook/wav2vec2-xls-r-300m
 datasets:
 - common_voice_11_0
 metrics:
@@ -11,8 +11,8 @@ model-index:
 - name: wav2vec2-large-xls-r-300m-assamese_speech_to_IPA
   results:
   - task:
-      type: automatic-speech-recognition
       name: Automatic Speech Recognition
+      type: automatic-speech-recognition
     dataset:
       name: common_voice_11_0
       type: common_voice_11_0
@@ -20,9 +20,9 @@ model-index:
       split: test
       args: as
     metrics:
-    - type: wer
-      value: 0.5796
-      name: Wer
+    - name: Wer
+      type: wer
+      value: 0.5974643423137876
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -32,8 +32,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_11_0 dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.7335
-- Wer: 0.5796
+- Loss: 1.0543
+- Wer: 0.5975
 
 ## Model description
 
@@ -61,15 +61,17 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
-- num_epochs: 20
+- num_epochs: 40
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Wer |
 |:-------------:|:-------:|:----:|:---------------:|:------:|
-| 4.7433 | 9.8765 | 400 | 0.9355 | 0.7508 |
-| 0.2856 | 19.7531 | 800 | 0.7335 | 0.5796 |
+| 4.4763 | 9.8765 | 400 | 1.0898 | 0.8007 |
+| 0.3692 | 19.7531 | 800 | 0.9617 | 0.6628 |
+| 0.1187 | 29.6296 | 1200 | 1.0302 | 0.5990 |
+| 0.0659 | 39.5062 | 1600 | 1.0543 | 0.5975 |
 
 
 ### Framework versions
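
For readers who want to try the checkpoint this "End of training" commit finalizes, the sketch below shows one way to run inference with the transformers library. It is a minimal, untested example: the repository id is inferred from the model name and the committer's username, and `sample_assamese.wav` is a placeholder audio file, so adjust both to your setup.

```python
# Minimal inference sketch. Assumptions: the repo id and audio path are
# placeholders, and the checkpoint uses the standard wav2vec2 CTC head.
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "jayashreedevi2020/wav2vec2-large-xls-r-300m-assamese_speech_to_IPA"  # assumed repo id
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# XLS-R checkpoints expect 16 kHz mono audio.
speech, _ = librosa.load("sample_assamese.wav", sr=16_000)  # placeholder file
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])  # greedy-decoded IPA transcription
```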
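The hyperparameters visible in this diff (linear schedule, 500 warmup steps, 40 epochs, Adam with betas=(0.9, 0.999) and epsilon=1e-08, native AMP) map roughly onto `transformers.TrainingArguments` as sketched below. This is not the training script behind the commit; values that do not appear in the diff (learning rate, batch sizes, seed) are omitted, and `output_dir` is a placeholder.

```python
# Rough TrainingArguments sketch for the hyperparameters shown in this diff only.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-assamese_speech_to_IPA",  # placeholder
    num_train_epochs=40,         # num_epochs: 40 (raised from 20 in this commit)
    lr_scheduler_type="linear",  # lr_scheduler_type: linear
    warmup_steps=500,            # lr_scheduler_warmup_steps: 500
    adam_beta1=0.9,              # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,           # epsilon=1e-08
    fp16=True,                   # mixed_precision_training: Native AMP
    eval_strategy="steps",       # "evaluation_strategy" on older transformers releases
    eval_steps=400,              # matches the 400-step cadence in the results table
)
```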
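The updated metadata records a WER of 0.5974643423137876 on the common_voice_11_0 `as` test split (rounded to 0.5975 in the card body). As a reminder of how that number is produced, the snippet below computes word error rate with the `evaluate` library; the strings are stand-ins, not actual model output.

```python
# WER via the `evaluate` library (requires `pip install evaluate jiwer`).
import evaluate

wer_metric = evaluate.load("wer")
score = wer_metric.compute(
    predictions=["predicted ipa transcription"],  # stand-in prediction
    references=["reference ipa transcription"],   # stand-in reference
)
print(score)  # fraction of word-level edits needed to match the reference
```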