anuragshas committed
Commit 1577855
1 Parent(s): 384e034

Update README.md

Files changed (1):
  1. README.md +71 -3

README.md CHANGED
@@ -6,11 +6,40 @@ tags:
 - automatic-speech-recognition
 - mozilla-foundation/common_voice_8_0
 - generated_from_trainer
 datasets:
-- common_voice
 model-index:
-- name: ''
-  results: []
 ---

 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -81,3 +110,42 @@ The following hyperparameters were used during training:
 - Pytorch 1.10.2+cu102
 - Datasets 1.18.2.dev0
 - Tokenizers 0.11.0
 - automatic-speech-recognition
 - mozilla-foundation/common_voice_8_0
 - generated_from_trainer
+- robust-speech-event
 datasets:
+- mozilla-foundation/common_voice_8_0
 model-index:
+- name: XLS-R-300M - Latvian
+  results:
+  - task:
+      name: Automatic Speech Recognition
+      type: automatic-speech-recognition
+    dataset:
+      name: Common Voice 8
+      type: mozilla-foundation/common_voice_8_0
+      args: lv
+    metrics:
+    - name: Test WER
+      type: wer
+      value: 9.926
+    - name: Test CER
+      type: cer
+      value: 2.807
+  - task:
+      name: Automatic Speech Recognition
+      type: automatic-speech-recognition
+    dataset:
+      name: Robust Speech Event - Dev Data
+      type: speech-recognition-community-v2/dev_data
+      args: lv
+    metrics:
+    - name: Test WER
+      type: wer
+      value: 36.110
+    - name: Test CER
+      type: cer
+      value: 14.244
 ---

 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 - Pytorch 1.10.2+cu102
 - Datasets 1.18.2.dev0
 - Tokenizers 0.11.0
+
+#### Evaluation Commands
+1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
+
+```bash
+python eval.py --model_id anuragshas/wav2vec2-xls-r-300m-lv-cv8-with-lm --dataset mozilla-foundation/common_voice_8_0 --config lv --split test
+```
+
+2. To evaluate on `speech-recognition-community-v2/dev_data`
+
+```bash
+python eval.py --model_id anuragshas/wav2vec2-xls-r-300m-lv-cv8-with-lm --dataset speech-recognition-community-v2/dev_data --config lv --split validation --chunk_length_s 5.0 --stride_length_s 1.0
+```
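The `--chunk_length_s 5.0 --stride_length_s 1.0` flags in the second command make the pipeline transcribe the long dev-data recordings in overlapping windows rather than in one pass. A minimal sketch of the windowing arithmetic — a hypothetical helper for illustration, not the pipeline's actual implementation:

```python
def chunk_positions(n_samples, rate, chunk_s=5.0, stride_s=1.0):
    """Yield (start, end) sample indices for overlapping windows.

    Each window is chunk_s seconds long; consecutive windows overlap by
    stride_s seconds on each side, so the model sees context at window
    edges and the overlapped logits can be discarded when stitching.
    """
    chunk = int(chunk_s * rate)
    stride = int(stride_s * rate)
    step = chunk - 2 * stride  # how far each window advances
    starts = range(0, max(n_samples - 2 * stride, 1), step)
    return [(s, min(s + chunk, n_samples)) for s in starts]

# 12 s of 16 kHz audio -> 5 s windows advancing by 3 s
windows = chunk_positions(12 * 16_000, 16_000)
```

With these numbers each window shares 2 s of audio with its neighbor (1 s per side), which is what lets chunked decoding avoid cutting words at hard boundaries.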
+
+### Inference With LM
+
+```python
+import torch
+import torchaudio.functional as F
+from datasets import load_dataset
+from transformers import AutoModelForCTC, AutoProcessor
+
+model_id = "anuragshas/wav2vec2-xls-r-300m-lv-cv8-with-lm"
+sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "lv", split="test", streaming=True, use_auth_token=True))
+sample = next(sample_iter)
+# Common Voice audio is 48 kHz; the model expects 16 kHz input
+resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
+
+model = AutoModelForCTC.from_pretrained(model_id)
+processor = AutoProcessor.from_pretrained(model_id)
+input_values = processor(resampled_audio, sampling_rate=16_000, return_tensors="pt").input_values
+with torch.no_grad():
+    logits = model(input_values).logits
+transcription = processor.batch_decode(logits.numpy()).text
+# => ""
+```
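The `processor.batch_decode` call above rescores the CTC logits with the bundled n-gram language model. Without an LM, CTC decoding reduces to a greedy rule: take the argmax label per frame, collapse consecutive repeats, and drop blanks. A self-contained sketch with a toy vocabulary (function and vocabulary are hypothetical, for illustration only):

```python
def ctc_greedy_decode(frame_ids, vocab, blank_id=0):
    """Greedy CTC decoding: collapse consecutive repeats, then drop blanks."""
    out = []
    prev = None
    for i in frame_ids:
        if i != prev and i != blank_id:
            out.append(vocab[i])
        prev = i
    return "".join(out)

# frame labels: _ _ k k a _ a t t  ('_' is the CTC blank, id 0);
# the blank between the two a's keeps them as separate characters
vocab = {0: "_", 1: "k", 2: "a", 3: "t"}
print(ctc_greedy_decode([0, 0, 1, 1, 2, 0, 2, 3, 3], vocab))  # -> "kaat"
```

This greedy path is what produces the "Without LM" column in the results table; the LM decoder instead searches over many candidate paths and reranks them with word-level probabilities.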
+
+### Eval results on Common Voice 8 "test" (WER):
+
+| Without LM | With LM (run `./eval.py`) |
+|---|---|
+| 16.997 | 9.926 |