anuragshas committed
Commit 32ca414
1 Parent(s): 554b2c2

Update README.md

Files changed (1)
  1. README.md +57 -4
README.md CHANGED
@@ -1,18 +1,37 @@
  ---
+ language:
+ - ha
  license: apache-2.0
  tags:
  - generated_from_trainer
+ - robust-speech-event
  datasets:
- - common_voice
+ - mozilla-foundation/common_voice_8_0
+ metrics:
+ - wer
  model-index:
- - name: wav2vec2-large-xls-r-300m-ha-cv8
-   results: []
+ - name: XLS-R-300M - Hausa
+   results:
+   - task:
+       type: automatic-speech-recognition
+       name: Speech Recognition
+     dataset:
+       type: mozilla-foundation/common_voice_8_0
+       name: Common Voice 8
+       args: ha
+     metrics:
+     - type: wer
+       value: 36.295
+       name: Test WER
+     - name: Test CER
+       type: cer
+       value: 11.073
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
  should probably proofread and complete it, then remove this comment. -->

- # wav2vec2-large-xls-r-300m-ha-cv8
+ # XLS-R-300M - Hausa

  This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
  It achieves the following results on the evaluation set:
@@ -74,3 +93,37 @@ The following hyperparameters were used during training:
  - Pytorch 1.10.0+cu111
  - Datasets 1.18.2
  - Tokenizers 0.11.0
+
+ #### Evaluation Commands
+ 1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
+
+ ```bash
+ python eval.py --model_id anuragshas/wav2vec2-large-xls-r-300m-ha-cv8 --dataset mozilla-foundation/common_voice_8_0 --config ha --split test
+ ```
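The 36.295 WER in the model-index above is the number produced by this `eval.py` run. As a rough guide only, the sketch below approximates such an evaluation loop with `jiwer`; its text normalization (plain lower-casing) is an assumption and will not necessarily match `eval.py`, so the exact figure it reports can differ.

```python
import torch
import jiwer
from datasets import load_dataset, Audio
from transformers import AutoModelForCTC, AutoProcessor

model_id = "anuragshas/wav2vec2-large-xls-r-300m-ha-cv8"
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)  # Wav2Vec2ProcessorWithLM

# Common Voice 8 is gated: accept the terms on the Hub and log in first.
dataset = load_dataset("mozilla-foundation/common_voice_8_0", "ha", split="test", use_auth_token=True)
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))

predictions, references = [], []
for sample in dataset:
    inputs = processor(sample["audio"]["array"], sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    # batch_decode on a ProcessorWithLM runs beam search with the n-gram LM
    predictions.append(processor.batch_decode(logits.numpy()).text[0].lower())
    # Lower-casing stands in for eval.py's normalization (an assumption)
    references.append(sample["sentence"].lower())

print(f"WER: {100 * jiwer.wer(references, predictions):.3f}")
```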
+
+
+ ### Inference With LM
+
+ ```python
+ import torch
+ from datasets import load_dataset
+ from transformers import AutoModelForCTC, AutoProcessor
+ import torchaudio.functional as F
+ model_id = "anuragshas/wav2vec2-large-xls-r-300m-ha-cv8"
+ sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "ha", split="test", streaming=True, use_auth_token=True))
+ sample = next(sample_iter)
+ resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
+ model = AutoModelForCTC.from_pretrained(model_id)
+ processor = AutoProcessor.from_pretrained(model_id)
+ input_values = processor(resampled_audio, return_tensors="pt").input_values
+ with torch.no_grad():
+     logits = model(input_values).logits
+ transcription = processor.batch_decode(logits.numpy()).text
+ # => "kakin hade ya ke da kyautar"
+ ```
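The `batch_decode` call above routes the CTC logits through the n-gram language model. The "Without LM" figure in the table below corresponds to plain greedy decoding over the same logits; a minimal sketch, continuing the Python session above (the exact no-LM decoding used for the reported 47.821 WER is not shown in the card, so treat this as an approximation):

```python
# Continuing the session from the "Inference With LM" snippet:
# `processor` is a Wav2Vec2ProcessorWithLM and `logits` are the CTC logits.
import torch

# Greedy (argmax) decoding uses only the CTC tokenizer, i.e. no n-gram LM.
predicted_ids = torch.argmax(logits, dim=-1)
greedy_transcription = processor.tokenizer.batch_decode(predicted_ids)

# Beam-search decoding through the n-gram LM is what batch_decode on the
# ProcessorWithLM does when given the logits, as in the snippet above.
lm_transcription = processor.batch_decode(logits.numpy()).text
```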
+
+ ### Eval results on Common Voice 8 "test" (WER):
+
+ | Without LM | With LM (run `./eval.py`) |
+ |---|---|
+ | 47.821 | 36.295 |
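For quick transcription without the boilerplate above, the model can also be loaded through the `automatic-speech-recognition` pipeline. A minimal sketch, assuming a `transformers` version whose ASR pipeline picks up the repository's LM decoder when `pyctcdecode` and `kenlm` are installed (otherwise it falls back to plain CTC decoding); the file name is hypothetical:

```python
from transformers import pipeline

# Builds the full preprocessing + CTC (+ LM, if supported) decoding stack.
asr = pipeline("automatic-speech-recognition", model="anuragshas/wav2vec2-large-xls-r-300m-ha-cv8")

# Accepts a path/URL to an audio file (decoded via ffmpeg) or a 16 kHz numpy array.
result = asr("hausa_sample.mp3")  # hypothetical local file
print(result["text"])
```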