kingabzpro committed
Commit a43a953
1 Parent(s): e3c01ee

Update README.md

Files changed (1):
  1. README.md +29 -4
README.md CHANGED
@@ -23,12 +23,12 @@ model-index:
       args: ga-IE # Optional. Example: zh-CN
     metrics:
     - type: wer # Required. Example: wer
-      value: 42.36 # Required. Example: 20.90
-      name: Test WER # Optional. Example: Test WER
+      value: 38.45 # Required. Example: 20.90
+      name: Test WER With LM # Optional. Example: Test WER
 
     - type: cer # Required. Example: wer
-      value: 17.68 # Required. Example: 20.90
-      name: Test CER # Optional. Example: Test WER
+      value: 16.52 # Required. Example: 20.90
+      name: Test CER With LM # Optional. Example: Test WER
 
 ---
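A note on the metadata change above: the new `value` fields report the LM-boosted test scores, with WER dropping from 42.36 to 38.45 and CER from 17.68 to 16.52 once the language model is used at decode time. For readers unfamiliar with the metrics, both can be computed from reference/prediction pairs with the `jiwer` package; a minimal sketch (the Irish strings below are hypothetical placeholders, not dataset samples, and `jiwer` itself is an assumption — any edit-distance implementation works):

```python
# Minimal sketch: computing WER and CER with jiwer (pip install jiwer).
# The reference/hypothesis strings are hypothetical placeholders.
import jiwer

reference = "dia dhuit ar maidin"   # ground-truth transcript
hypothesis = "dia duit ar maidin"   # model output

print("WER:", jiwer.wer(reference, hypothesis))  # word-level edit rate
print("CER:", jiwer.cer(reference, hypothesis))  # character-level edit rate
```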
 
@@ -43,7 +43,32 @@ It achieves the following results on the evaluation set:
 - Wer: 0.4236
 - Cer: 0.1768
 
+#### Evaluation Commands
+1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`:
+
+```bash
+python eval.py --model_id kingabzpro/wav2vec2-large-xls-r-1b-Irish --dataset mozilla-foundation/common_voice_8_0 --config ga-IE --split test
+```
+
+### Inference With LM
+
+```python
+import torch
+from datasets import load_dataset
+from transformers import AutoModelForCTC, AutoProcessor
+import torchaudio.functional as F
+
+model_id = "kingabzpro/wav2vec2-large-xls-r-1b-Irish"
+
+# Stream a single sample from the Common Voice 8.0 Irish test split.
+sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "ga-IE", split="test", streaming=True, use_auth_token=True))
+sample = next(sample_iter)
+
+# Common Voice audio is 48 kHz; the model expects 16 kHz.
+resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
+
+model = AutoModelForCTC.from_pretrained(model_id)
+processor = AutoProcessor.from_pretrained(model_id)
+
+input_values = processor(resampled_audio, sampling_rate=16_000, return_tensors="pt").input_values
+with torch.no_grad():
+    logits = model(input_values).logits
+
+# batch_decode on the LM-equipped processor decodes with the n-gram language model.
+transcription = processor.batch_decode(logits.numpy()).text
+```
 
 ### Training hyperparameters
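The `eval.py` referenced in the evaluation command is not shown in this diff; it lives alongside the README in the model repository. As a rough sketch of what such an evaluation does — load the split, transcribe each clip, score the predictions — and not the actual script, assuming a `transformers` pipeline and `jiwer` for scoring:

```python
# Hypothetical sketch of an eval loop like eval.py's; not the actual script.
# Assumes access to mozilla-foundation/common_voice_8_0 (gated, needs auth).
import jiwer
from datasets import Audio, load_dataset
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="kingabzpro/wav2vec2-large-xls-r-1b-Irish")

ds = load_dataset("mozilla-foundation/common_voice_8_0", "ga-IE",
                  split="test", use_auth_token=True)
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))  # decode at 16 kHz

predictions, references = [], []
for sample in ds:
    predictions.append(asr(sample["audio"]["array"])["text"])
    references.append(sample["sentence"])

print("WER:", 100 * jiwer.wer(references, predictions))
print("CER:", 100 * jiwer.cer(references, predictions))
```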
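The inference snippet added in the diff decodes with `processor.batch_decode`, which for a `Wav2Vec2ProcessorWithLM` runs beam search against the bundled n-gram language model. For comparison, plain greedy CTC decoding without the LM goes through the underlying tokenizer; a short sketch reusing `logits` and `processor` from that snippet:

```python
# Sketch: greedy CTC decoding of the same logits, skipping the language model.
# Reuses `logits` and `processor` from the inference snippet above.
import torch

pred_ids = torch.argmax(logits, dim=-1)               # most likely token per frame
greedy = processor.tokenizer.batch_decode(pred_ids)   # collapse repeats and blanks
print(greedy)
```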
74