Baybars committed on
Commit 3c1c8cf
1 parent: 0f68f26

update model card README.md

Files changed (1): README.md (+5, -20)
README.md CHANGED
@@ -1,9 +1,6 @@
 ---
-language:
-- tr
+license: apache-2.0
 tags:
-- automatic-speech-recognition
-- common_voice
 - generated_from_trainer
 datasets:
 - common_voice
@@ -17,10 +14,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 #
 
-This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the COMMON_VOICE - TR dataset.
-It achieves the following results on the evaluation set:
-- Loss: 59.6880
-- Wer: 1.0
+This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
 
 ## Model description
 
@@ -39,27 +33,18 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 0.0003
-- train_batch_size: 16
+- learning_rate: 0.0005
+- train_batch_size: 32
 - eval_batch_size: 8
 - seed: 42
-- gradient_accumulation_steps: 2
-- total_train_batch_size: 32
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
-- num_epochs: 5.0
+- num_epochs: 2.0
 - mixed_precision_training: Native AMP
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Wer |
-|:-------------:|:-----:|:----:|:---------------:|:------:|
-| No log | 0.92 | 100 | 149.2460 | 1.0032 |
-| No log | 1.83 | 200 | 135.3586 | 1.0 |
-| No log | 2.75 | 300 | 114.3390 | 1.0 |
-| No log | 3.67 | 400 | 88.3077 | 1.0 |
-| 67.8005 | 4.59 | 500 | 64.6167 | 1.0 |
 
 
 ### Framework versions
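
As a side note on the updated hyperparameters: they map one-to-one onto `transformers.TrainingArguments`. A minimal sketch of the equivalent configuration, assuming the standard `Trainer` API; the output directory is hypothetical, and the actual training script is not part of this commit:

```python
from transformers import TrainingArguments

# Sketch of the new run's configuration, taken from the hyperparameter
# list in the diff above. "./xls-r-300m-common-voice" is a hypothetical
# output directory, not named anywhere in the commit.
training_args = TrainingArguments(
    output_dir="./xls-r-300m-common-voice",
    learning_rate=5e-4,               # learning_rate: 0.0005
    per_device_train_batch_size=32,   # train_batch_size: 32
    per_device_eval_batch_size=8,     # eval_batch_size: 8
    seed=42,                          # seed: 42
    lr_scheduler_type="linear",       # lr_scheduler_type: linear
    warmup_steps=500,                 # lr_scheduler_warmup_steps: 500
    num_train_epochs=2.0,             # num_epochs: 2.0
    fp16=True,                        # mixed_precision_training: Native AMP
)
# Adam betas (0.9, 0.999) and epsilon 1e-08 are the TrainingArguments
# defaults, so they need no explicit arguments here.
```

Note that the old run's `gradient_accumulation_steps: 2` (16 x 2 for a total batch size of 32) is gone; the new run reaches a batch size of 32 directly.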
 
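Not part of the commit, but for orientation: a checkpoint fine-tuned from facebook/wav2vec2-xls-r-300m carries a CTC head and is typically loaded as below. A minimal sketch; the repository id is a placeholder (the commit does not name the final repo), and the zero-filled list stands in for real 16 kHz mono speech:

```python
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Placeholder repo id: the commit does not state the checkpoint's
# final repository name.
model_id = "Baybars/wav2vec2-xls-r-300m-common-voice"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech = [0.0] * 16_000  # stand-in for one second of 16 kHz mono audio
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits  # (batch, time, vocab)
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```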