kingabzpro committed on
Commit 56f70dd
1 Parent(s): 617876e

Update README.md

Files changed (1)
  1. README.md +28 -3
README.md CHANGED
@@ -23,10 +23,10 @@ model-index:
       args: pa-IN # Optional. Example: zh-CN
       metrics:
       - type: wer # Required. Example: wer
-        value: 39.47 # Required. Example: 20.90
-        name: Test WER # Optional. Example: Test WER
+        value: 36.02 # Required. Example: 20.90
+        name: Test WER With LM # Optional. Example: Test WER
       - type: cer # Required. Example: wer
-        value: 13.60 # Required. Example: 20.90
+        value: 12.81 With LM # Required. Example: 20.90
         name: Test CER # Optional. Example: Test WER
 
 ---
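For readers unfamiliar with the two metrics in the metadata above: WER is the word-level edit distance divided by the number of reference words, and CER is the same ratio at character level. A tiny illustration (not from the model card), assuming the `jiwer` package:

```python
# Illustration of the metrics in the model-index metadata: WER counts word-level
# substitutions/insertions/deletions, CER counts character-level ones.
from jiwer import wer, cer

reference = "the cat sat on the mat"
hypothesis = "the cat sit on mat"

print(wer(reference, hypothesis))  # 2 word errors / 6 reference words ≈ 0.33
print(cer(reference, hypothesis))  # character-level edit distance / reference length
```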
@@ -42,7 +42,32 @@ It achieves the following results on the evaluation set:
 - Wer: 0.4939
 - Cer: 0.2238
 
+ #### Evaluation Commands
+ 1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
+
+ ```bash
+ python eval.py --model_id kingabzpro/wav2vec2-large-xlsr-53-punjabi --dataset mozilla-foundation/common_voice_8_0 --config pa-IN --split test
+ ```
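The `eval.py` script itself is not part of this change. As a rough guide, the sketch below shows what such an evaluation boils down to: transcribe the test split and score the hypotheses against the reference sentences. It assumes the `jiwer` package for WER/CER; the real script may also normalize text before scoring, so the resulting numbers can differ slightly from those reported in the metadata.

```python
# Minimal sketch (not the actual eval.py): transcribe the Common Voice test split
# and score WER/CER with jiwer (assumed dependency).
import torch
import torchaudio.functional as F
from datasets import load_dataset
from jiwer import cer, wer
from transformers import AutoModelForCTC, AutoProcessor

model_id = "kingabzpro/wav2vec2-large-xlsr-53-punjabi"
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

test_set = load_dataset("mozilla-foundation/common_voice_8_0", "pa-IN",
                        split="test", streaming=True, use_auth_token=True)

predictions, references = [], []
for sample in test_set:
    # Common Voice audio is 48 kHz; the model expects 16 kHz input
    audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
    inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    predictions.append(processor.batch_decode(logits.numpy()).text[0])
    references.append(sample["sentence"])

print("WER:", wer(references, predictions))
print("CER:", cer(references, predictions))
```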
+
+ ### Inference With LM
+
+ ```python
+ import torch
+ from datasets import load_dataset
+ from transformers import AutoModelForCTC, AutoProcessor
+ import torchaudio.functional as F
+
+ model_id = "kingabzpro/wav2vec2-large-xlsr-53-punjabi"
+
+ # Stream a single sample from the Common Voice 8.0 Punjabi test split
+ sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "pa-IN", split="test", streaming=True, use_auth_token=True))
+ sample = next(sample_iter)
+
+ # Common Voice audio is 48 kHz; resample to the 16 kHz the model expects
+ resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
+
+ model = AutoModelForCTC.from_pretrained(model_id)
+ processor = AutoProcessor.from_pretrained(model_id)
+
+ input_values = processor(resampled_audio, return_tensors="pt").input_values
+
+ with torch.no_grad():
+     logits = model(input_values).logits
+
+ # batch_decode runs the LM-boosted beam search and returns the transcription text
+ transcription = processor.batch_decode(logits.numpy()).text
+ ```
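In the snippet above, `AutoProcessor` resolves to `Wav2Vec2ProcessorWithLM` here (the repository ships a language model), so `batch_decode` performs LM-boosted beam search over the CTC logits. For comparison, here is a minimal sketch (not from the model card) of plain greedy CTC decoding with `Wav2Vec2Processor`, i.e. without the language model, which is presumably what the earlier "Test WER"/"Test CER" values reflect:

```python
import torch
import torchaudio.functional as F
from datasets import load_dataset
from transformers import AutoModelForCTC, Wav2Vec2Processor

model_id = "kingabzpro/wav2vec2-large-xlsr-53-punjabi"
model = AutoModelForCTC.from_pretrained(model_id)
# Wav2Vec2Processor bundles only the feature extractor and tokenizer (no LM)
processor = Wav2Vec2Processor.from_pretrained(model_id)

sample = next(iter(load_dataset("mozilla-foundation/common_voice_8_0", "pa-IN", split="test", streaming=True, use_auth_token=True)))
audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()

input_values = processor(audio, sampling_rate=16_000, return_tensors="pt").input_values
with torch.no_grad():
    logits = model(input_values).logits

# CTC greedy decoding: argmax per frame, then collapse repeats and blanks in batch_decode
pred_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(pred_ids)
print(transcription)
```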
 
 ### Training hyperparameters