pere committed on
Commit 47cabd6
1 Parent(s): f796d1d

update model card README.md

Files changed (1)
  1. README.md +14 -12
README.md CHANGED
@@ -1,11 +1,13 @@
 ---
-license: apache-2.0
+language:
+- sv-SE
+license: cc0-1.0
 tags:
 - automatic-speech-recognition
-- NbAiLab/NPSC
+- mozilla-foundation/common_voice_7_0
 - generated_from_trainer
 datasets:
-- npsc
+- common_voice
 model-index:
 - name: xls-npsc-oh
   results: []
@@ -16,10 +18,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # xls-npsc-oh
 
-This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the NBAILAB/NPSC - 48K_MP3 dataset.
+This model is a fine-tuned version of [KBLab/wav2vec2-large-voxrex](https://huggingface.co/KBLab/wav2vec2-large-voxrex) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - SV-SE dataset.
 It achieves the following results on the evaluation set:
-- Loss: 3.0429
-- Wer: 1.0
+- Loss: 0.2314
+- Wer: 0.64
 
 ## Model description
 
@@ -38,16 +40,16 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 7.5e-05
-- train_batch_size: 8
-- eval_batch_size: 8
+- learning_rate: 5e-05
+- train_batch_size: 32
+- eval_batch_size: 32
 - seed: 42
 - gradient_accumulation_steps: 4
-- total_train_batch_size: 32
+- total_train_batch_size: 128
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- lr_scheduler_warmup_steps: 500
-- num_epochs: 10.0
+- lr_scheduler_warmup_steps: 1000
+- num_epochs: 5.0
 - mixed_precision_training: Native AMP
 
 ### Training results
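
For readers of the updated card, here is a minimal inference sketch. It is a hypothetical example, not part of the commit: the repo id `NbAiLab/xls-npsc-oh` is assumed from the model name and the committer's organization, and the placeholder waveform stands in for real 16 kHz mono audio.

```python
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Assumed repo id; the commit itself does not state where the checkpoint is published.
model_id = "NbAiLab/xls-npsc-oh"

processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Placeholder input: one second of 16 kHz silence; substitute real mono audio samples.
waveform = np.zeros(16_000, dtype=np.float32)

inputs = processor(waveform, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: pick the most likely token per frame, then collapse.
pred_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(pred_ids)[0]
print(transcription)
```

The reported WER of 0.64 means roughly 64% of words in the evaluation set were transcribed with an error, so output from this checkpoint should be treated as a rough draft.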
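
The hyperparameter list in the final hunk maps onto a `transformers` training configuration roughly as below. This is a sketch reconstructed from the card, not the author's actual training script; the listed Adam betas=(0.9,0.999) and epsilon=1e-08 are the `TrainingArguments` defaults, so they need no explicit arguments.

```python
from transformers import TrainingArguments

# Reconstructed from the card's hyperparameter list; output_dir is an assumption.
training_args = TrainingArguments(
    output_dir="xls-npsc-oh",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,  # 32 * 4 = 128 total train batch size
    num_train_epochs=5.0,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    seed=42,
    fp16=True,  # "Native AMP" mixed-precision training
)
```

The card's total_train_batch_size of 128 follows from the per-device batch size (32) times gradient_accumulation_steps (4), assuming a single device.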