jonatasgrosman committed on
Commit
5e9d2aa
1 Parent(s): 9eedfea

Update README.md

Files changed (1)
  1. README.md +21 -10
README.md CHANGED
@@ -132,22 +132,33 @@ test_dataset = test_dataset.map(speech_file_to_array_fn)
 # Preprocessing the datasets.
 # We need to read the audio files as arrays
 def evaluate(batch):
-inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
+    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
 
-with torch.no_grad():
-logits = model(inputs.input_values.to(DEVICE), attention_mask=inputs.attention_mask.to(DEVICE)).logits
+    with torch.no_grad():
+        logits = model(inputs.input_values.to(DEVICE), attention_mask=inputs.attention_mask.to(DEVICE)).logits
 
-pred_ids = torch.argmax(logits, dim=-1)
-batch["pred_strings"] = processor.batch_decode(pred_ids)
-return batch
+    pred_ids = torch.argmax(logits, dim=-1)
+    batch["pred_strings"] = processor.batch_decode(pred_ids)
+    return batch
 
 result = test_dataset.map(evaluate, batched=True, batch_size=8)
 
-print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"], chunk_size=1000)))
-print("CER: {:2f}".format(100 * cer.compute(predictions=result["pred_strings"], references=result["sentence"], chunk_size=1000)))
+predictions = [x.upper() for x in result["pred_strings"]]
+references = [x.upper() for x in result["sentence"]]
+
+print(f"WER: {wer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}")
+print(f"CER: {cer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}")
 ```
 
 **Test Result**:
 
-- WER: 13.60%
-- CER: 4.45%
+My model may report better scores than others because of some specificity of my evaluation script, so I ran the same evaluation script on other models (on 2021-04-21) to make a fairer comparison.
+
+| Model | WER | CER |
+| ------------- | ------------- | ------------- |
+| jonatasgrosman/wav2vec2-large-xlsr-53-dutch | **13.60%** | **4.45%** |
+| wietsedv/wav2vec2-large-xlsr-53-dutch | 16.78% | 5.60% |
+| facebook/wav2vec2-large-xlsr-53-dutch | 20.97% | 7.24% |
+| nithinholla/wav2vec2-large-xlsr-53-dutch | 21.39% | 7.29% |
+| MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Dutch | 25.89% | 9.12% |
+| simonsr/wav2vec2-large-xlsr-dutch | 38.34% | 13.29% |
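
For context, the evaluation block in this diff relies on objects defined earlier in the README: `processor`, `model`, `DEVICE`, `test_dataset`, `wer`, `cer`, and the `speech_file_to_array_fn` visible in the hunk header. A minimal sketch of that setup is below; the exact dataset loader, metric scripts, and audio decoding used in the model card are assumptions here, not a copy of the card's own code.

```python
import torch
import librosa
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-dutch"

# Metrics used by the snippet above. WER needs the `jiwer` package;
# the CER metric may require a recent `datasets` version or a custom metric script.
wer = load_metric("wer")
cer = load_metric("cer")

processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID).to(DEVICE)

# Dutch test split of Common Voice (assumed here to be the test set behind the reported scores)
test_dataset = load_dataset("common_voice", "nl", split="test")

def speech_file_to_array_fn(batch):
    # Decode each audio file to the 16 kHz float array the processor expects
    speech_array, _ = librosa.load(batch["path"], sr=16_000)
    batch["speech"] = speech_array
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
```

With this in place, the `evaluate` function and the WER/CER prints from the diff can run as written; the main change the commit makes to the scoring itself is upper-casing both predictions and references before computing WER and CER, which makes the comparison case-insensitive.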