cahya committed
Commit 1b9a583
1 Parent(s): 61f7ff8

update model card README.md

Files changed (1)
  1. README.md +9 -21
README.md CHANGED
@@ -1,13 +1,7 @@
 ---
-language:
-- tr
 license: apache-2.0
 tags:
-- automatic-speech-recognition
-- mozilla-foundation/common_voice_7_0
 - generated_from_trainer
-datasets:
-- common_voice
 model-index:
 - name: ''
   results: []
@@ -18,10 +12,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 #
 
-This model is a fine-tuned version of [cahya/wav2vec2-base-turkish-artificial-cv](https://huggingface.co/cahya/wav2vec2-base-turkish-artificial-cv) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - TR dataset.
-It achieves the following results on the evaluation set:
-- Loss: 0.1409
-- Wer: 0.1309
+This model is a fine-tuned version of [cahya/wav2vec2-base-turkish-artificial](https://huggingface.co/cahya/wav2vec2-base-turkish-artificial) on an unknown dataset.
 
 ## Model description
 
@@ -41,23 +32,20 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 0.0003
-- train_batch_size: 16
-- eval_batch_size: 2
+- train_batch_size: 128
+- eval_batch_size: 8
 - seed: 42
 - gradient_accumulation_steps: 4
-- total_train_batch_size: 64
+- total_train_batch_size: 512
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- lr_scheduler_warmup_steps: 2000
-- num_epochs: 1.0
-
-### Training results
-
-
+- lr_scheduler_warmup_steps: 100
+- num_epochs: 100.0
+- mixed_precision_training: Native AMP
 
 ### Framework versions
 
 - Transformers 4.17.0.dev0
-- Pytorch 1.10.1+cu102
+- Pytorch 1.10.2+cu102
 - Datasets 1.18.3
-- Tokenizers 0.10.3
+- Tokenizers 0.11.0
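
Neither the old nor the new card shows how to actually run the checkpoint. Below is a minimal transcription sketch assuming the standard Wav2Vec2 CTC API from Transformers; the repo id and the audio path are placeholders chosen for illustration, not values taken from this commit.

```python
# Minimal inference sketch. Assumptions: the repo id and "sample.wav"
# are placeholders, and the audio is a mono WAV file.
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "cahya/wav2vec2-base-turkish"  # placeholder repo id
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load and resample to the 16 kHz rate wav2vec2 models expect.
speech, sample_rate = torchaudio.load("sample.wav")  # hypothetical file
speech = torchaudio.functional.resample(speech, sample_rate, 16_000)
speech = speech.squeeze().numpy()

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: argmax per frame, then collapse repeats and blanks.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```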
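The commit also drops the old evaluation block (Loss: 0.1409, Wer: 0.1309). For context, WER (word error rate) is the word-level edit distance between hypothesis and reference divided by the number of reference words, so 0.1309 means roughly 13 word errors per 100 reference words. A sketch using the `wer` metric bundled with Datasets 1.18 (it requires the `jiwer` package), with invented Turkish strings:

```python
# WER sketch: word-level edit distance / reference word count.
# Assumptions: jiwer is installed; the strings below are made-up examples.
from datasets import load_metric

wer_metric = load_metric("wer")
predictions = ["merhaba dünya"]          # hypothesis misses one word
references = ["merhaba dünya nasılsın"]  # 3 reference words
wer = wer_metric.compute(predictions=predictions, references=references)
print(wer)  # 1 deletion / 3 words = 0.3333...
```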
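For readers reconstructing the run, the updated hyperparameter list maps onto 🤗 `TrainingArguments` roughly as below. This is a sketch under the assumption that the standard Trainer-based fine-tuning script was used; `output_dir` is hypothetical. Note that `total_train_batch_size` is derived rather than set directly: 128 per-device samples × 4 gradient-accumulation steps (× 1 device) = 512.

```python
# Sketch of the training configuration implied by the new card.
# Assumption: the standard 🤗 Trainer was used; output_dir is hypothetical.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./wav2vec2-base-turkish",  # hypothetical
    learning_rate=3e-4,                    # 0.0003 on the card
    per_device_train_batch_size=128,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,         # effective batch: 128 * 4 = 512
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=100.0,
    seed=42,
    fp16=True,  # "mixed_precision_training: Native AMP"
)
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the TrainingArguments
# defaults, matching the optimizer line on the card.
```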