hts98 committed
Commit 0baed69 · 1 Parent(s): f9f6cee

update model card README.md

Files changed (1)
  1. README.md +24 -14
README.md CHANGED
@@ -1,14 +1,27 @@
 ---
-license: cc-by-nc-4.0
+license: apache-2.0
 tags:
-- automatic-speech-recognition
-- hts98/original_ver1.2
 - generated_from_trainer
+datasets:
+- common_voice
 metrics:
 - wer
 model-index:
 - name: wav2vec2-common_voice-tr-mms-demo
-  results: []
+  results:
+  - task:
+      name: Automatic Speech Recognition
+      type: automatic-speech-recognition
+    dataset:
+      name: common_voice
+      type: common_voice
+      config: vi
+      split: test
+      args: vi
+    metrics:
+    - name: Wer
+      type: wer
+      value: 1.0
 ---

 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -16,9 +29,9 @@ should probably proofread and complete it, then remove this comment. -->

 # wav2vec2-common_voice-tr-mms-demo

-This model is a fine-tuned version of [nguyenvulebinh/wav2vec2-base-vietnamese-250h](https://huggingface.co/nguyenvulebinh/wav2vec2-base-vietnamese-250h) on the HTS98/ORIGINAL_VER1.2 - NA dataset.
+This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
 It achieves the following results on the evaluation set:
-- Loss: 3.6482
+- Loss: 3.6709
 - Wer: 1.0

 ## Model description
@@ -39,7 +52,7 @@ More information needed

 The following hyperparameters were used during training:
 - learning_rate: 0.001
-- train_batch_size: 32
+- train_batch_size: 4
 - eval_batch_size: 8
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
@@ -51,16 +64,13 @@ The following hyperparameters were used during training:

 | Training Loss | Epoch | Step | Validation Loss | Wer |
 |:-------------:|:-----:|:----:|:---------------:|:---:|
-| No log | 1.0 | 107 | 3.6306 | 1.0 |
-| No log | 2.0 | 214 | 3.6398 | 1.0 |
-| No log | 3.0 | 321 | 3.6340 | 1.0 |
-| No log | 4.0 | 428 | 3.6451 | 1.0 |
-| 3.9157 | 5.0 | 535 | 3.6482 | 1.0 |
+| No log | 1.79 | 100 | 3.6345 | 1.0 |
+| No log | 3.57 | 200 | 3.6709 | 1.0 |


 ### Framework versions

 - Transformers 4.31.0.dev0
-- Pytorch 2.0.0+cu117
+- Pytorch 2.0.1+cu118
 - Datasets 2.7.0
-- Tokenizers 0.13.2
+- Tokenizers 0.13.3
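For reference, a minimal inference sketch for the checkpoint the updated card describes. This is a hedged example rather than part of the committed README: the repo id `hts98/wav2vec2-common_voice-tr-mms-demo`, the presence of processor/tokenizer files in that repo, and the use of the Common Voice Vietnamese test split named in the card metadata are all assumptions.

```python
# Hedged usage sketch (assumed repo id and dataset config; adjust to the
# actual checkpoint location and evaluation data).
import torch
from datasets import Audio, load_dataset
from transformers import AutoModelForCTC, AutoProcessor

model_id = "hts98/wav2vec2-common_voice-tr-mms-demo"  # assumed repo id
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForCTC.from_pretrained(model_id)
model.eval()

# Grab one Vietnamese Common Voice test clip and resample it to the 16 kHz
# rate wav2vec2 expects.
ds = load_dataset("common_voice", "vi", split="test")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
sample = ds[0]["audio"]

inputs = processor(sample["array"], sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding: take the most likely token per frame, then let the
# tokenizer collapse repeats and blanks into text.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```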