syedmuhammad committed
Commit ab94e8d
1 Parent(s): 71dd1b4

first_release

README.md CHANGED
@@ -1,6 +1,6 @@
 ---
 license: apache-2.0
-base_model: facebook/wav2vec2-xls-r-300m
+base_model: facebook/wav2vec2-large-xlsr-53
 tags:
 - generated_from_trainer
 datasets:
@@ -15,15 +15,15 @@ should probably proofread and complete it, then remove this comment. -->
 
 # Wav2Vec2-Urdu-300M
 
-This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_13_0 dataset.
+This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice_13_0 dataset.
 It achieves the following results on the evaluation set:
 - eval_loss: inf
-- eval_wer: 0.6044
-- eval_runtime: 235.4174
-- eval_samples_per_second: 14.035
-- eval_steps_per_second: 1.754
-- epoch: 5.17
-- step: 4800
+- eval_wer: 0.3928
+- eval_runtime: 234.5067
+- eval_samples_per_second: 14.089
+- eval_steps_per_second: 1.761
+- epoch: 13.56
+- step: 8400
 
 ## Model description
 
@@ -43,11 +43,11 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 0.0003
-- train_batch_size: 4
+- train_batch_size: 6
 - eval_batch_size: 8
 - seed: 42
 - gradient_accumulation_steps: 2
-- total_train_batch_size: 8
+- total_train_batch_size: 12
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
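
The hyperparameters listed in the updated card map naturally onto a `transformers` `TrainingArguments` configuration. The training script itself is not part of this commit, so the following is only a minimal sketch under that assumption; the output path is illustrative and the number of epochs is deliberately left out because the card does not state it.

```python
# Minimal sketch (assumption): how the hyperparameters from the updated README
# could be expressed with the Hugging Face Trainer. The actual training script
# is not included in this commit; the output_dir is hypothetical.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-urdu-300m",    # hypothetical output path
    learning_rate=3e-4,                 # learning_rate: 0.0003
    per_device_train_batch_size=6,      # train_batch_size: 6
    per_device_eval_batch_size=8,       # eval_batch_size: 8
    gradient_accumulation_steps=2,      # gradient_accumulation_steps: 2
    seed=42,                            # seed: 42
    lr_scheduler_type="linear",         # lr_scheduler_type: linear
    warmup_steps=500,                   # lr_scheduler_warmup_steps: 500
    adam_beta1=0.9,                     # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                  # epsilon: 1e-08
)
```

The card's `total_train_batch_size: 12` is consistent with these values: 6 (per-device batch) × 2 (gradient accumulation steps) on a single device.
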
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:44145407c64cfff4be1adad5db1f255a453a773bc541c736784a486214edc6b5
+oid sha256:2f1f359573045800386d975fe5f4899628b42d2d623fdb74f62045f693af05f3
 size 1261950980
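
Both binary files in this commit are Git LFS pointers: the `oid sha256:` field is the SHA-256 digest of the actual file content and `size` is its length in bytes. A generic sketch of checking a downloaded copy against the new pointer (the filename, digest, and size below are the ones from this diff):

```python
# Sketch: verify a downloaded LFS object against the sha256 oid and size
# recorded in the pointer file above.
import hashlib
import os

expected_oid = "2f1f359573045800386d975fe5f4899628b42d2d623fdb74f62045f693af05f3"
expected_size = 1261950980

h = hashlib.sha256()
with open("model.safetensors", "rb") as f:
    # Hash in 1 MiB chunks so the 1.2 GB file is not loaded into memory at once.
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)

assert os.path.getsize("model.safetensors") == expected_size, "size mismatch"
assert h.hexdigest() == expected_oid, "sha256 mismatch"
print("model.safetensors matches the LFS pointer")
```
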
runs/Nov06_16-13-33_638c50298a65/events.out.tfevents.1699287443.638c50298a65.6792.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e1facc720d58324b06df127381c6478c4635ba7d50517675d493161b0f2ec852
-size 10937
+oid sha256:54238a411b685e0865c64b5696e0bde2e69adc92bebd9816c5c34fd16a02dc78
+size 11412
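
The updated `events.out.tfevents.*` file is a TensorBoard log written during training. Assuming the scalars follow the usual `Trainer` naming (`train/loss`, `eval/wer`, and so on, which is an assumption rather than something recorded in this diff), the file can be inspected offline roughly like this:

```python
# Sketch: read scalars from the updated TensorBoard event file in runs/.
# Tag names are assumptions based on typical Trainer logging, not this diff.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

acc = EventAccumulator("runs/Nov06_16-13-33_638c50298a65")
acc.Reload()

print(acc.Tags()["scalars"])           # list the scalar tags actually present
for event in acc.Scalars("eval/wer"):  # hypothetical tag name; pick one from the list above
    print(event.step, event.value)
```
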