kingabzpro committed
Commit cf257e4
1 Parent(s): a24ea1d

Update README.md

Files changed (1)
  1. README.md +28 -3
README.md CHANGED
@@ -23,11 +23,11 @@ model-index:
       args: ar # Optional. Example: zh-CN
       metrics:
       - type: wer # Required. Example: wer
-        value: 43.55 # Required. Example: 20.90
-        name: Test WER # Optional. Example: Test WER
+        value: 38.83 # Required. Example: 20.90
+        name: Test WER With LM # Optional. Example: Test WER
 
       - type: cer # Required. Example: wer
-        value: 16.66 # Required. Example: 20.90
+        value: 15.33 # Required. Example: 20.90
         name: Test CER # Optional. Example: Test WER
 
 ---
@@ -43,7 +43,32 @@ It achieves the following results on the evaluation set:
 - Wer: 0.4256
 - Cer: 0.1528
 
+#### Evaluation Commands
+1. To evaluate on `mozilla-foundation/common_voice_7_0` with split `test`
+
+```bash
+python eval.py --model_id kingabzpro/wav2vec2-large-xlsr-300-arabic --dataset mozilla-foundation/common_voice_7_0 --config ar --split test
+```
+
+### Inference With LM
+
+```python
+import torch
+from datasets import load_dataset
+from transformers import AutoModelForCTC, AutoProcessor
+import torchaudio.functional as F
+
+model_id = "kingabzpro/wav2vec2-large-xlsr-300-arabic"
+
+# Stream one Arabic test sample from Common Voice 8.0 and resample 48 kHz -> 16 kHz
+sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "ar", split="test", streaming=True, use_auth_token=True))
+sample = next(sample_iter)
+resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
+
+model = AutoModelForCTC.from_pretrained(model_id)
+processor = AutoProcessor.from_pretrained(model_id)
+
+input_values = processor(resampled_audio, sampling_rate=16_000, return_tensors="pt").input_values
+with torch.no_grad():
+    logits = model(input_values).logits
+
+# Decode the CTC logits with the processor's attached language model
+transcription = processor.batch_decode(logits.numpy()).text
+```
 
 ### Training hyperparameters
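The WER/CER values added in this commit come from the `eval.py` command shown in the diff. As a rough illustration of what that evaluation amounts to, below is a minimal sketch that scores LM-decoded transcriptions against the reference sentences; it is not the actual `eval.py`. The use of `jiwer`, the `sentence` reference field, and the 100-example subset are assumptions for illustration only.

```python
# Minimal sketch (not the repository's eval.py): score LM-decoded output with jiwer.
# Assumptions: references live in the "sentence" field, audio is resampled to 16 kHz,
# and jiwer is used to compute WER/CER.
import torch
import jiwer
from datasets import load_dataset, Audio
from transformers import AutoModelForCTC, AutoProcessor

model_id = "kingabzpro/wav2vec2-large-xlsr-300-arabic"
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

ds = load_dataset("mozilla-foundation/common_voice_7_0", "ar", split="test", use_auth_token=True)
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))  # resample on the fly

predictions, references = [], []
for ex in ds.select(range(100)):  # small subset, for illustration only
    inputs = processor(ex["audio"]["array"], sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    # batch_decode on a processor with an attached LM performs beam-search decoding
    predictions.append(processor.batch_decode(logits.numpy()).text[0])
    references.append(ex["sentence"])

print("WER:", jiwer.wer(references, predictions))
print("CER:", jiwer.cer(references, predictions))
```

The actual evaluation script likely applies additional text normalization before scoring, so numbers from this sketch need not match the reported 38.83 WER / 15.33 CER exactly.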