sammy786 committed
Commit 560a9eb
1 Parent(s): d12354f

First Version

Files changed (1):
  1. README.md +81 -0

README.md ADDED
---
language: mn
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Mongolian by Salim Shaikh
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice mn
      type: common_voice
      args: mn
    metrics:
    - name: Test WER
      type: wer
      value: 41.92
---
# Wav2Vec2-Large-XLSR-53-Mongolian

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Mongolian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16 kHz.
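If your recordings are at a different rate, torchaudio can resample them before inference; a minimal sketch (the file path is a placeholder):

```python
import torchaudio

# "audio.wav" is a placeholder path; replace it with your own recording.
speech, sampling_rate = torchaudio.load("audio.wav")

# Resample to the 16 kHz rate the model expects, if necessary.
if sampling_rate != 16_000:
    speech = torchaudio.transforms.Resample(orig_freq=sampling_rate, new_freq=16_000)(speech)
```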
## Usage
The model can be used directly (without a language model) as follows:

```python
import re

import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_name = "sammy786/wav2vec2-large-xlsr-mongolian"
device = "cuda"
# Punctuation and special characters to strip from the reference transcripts.
chars_to_ignore_regex = r'[\,\?\.\!\-\;\:\"\“\%\‘\”\�]'

model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(model_name)

ds = load_dataset("common_voice", "mn", split="test", data_dir="./cv-corpus-6.1-2020-12-11")

# Common Voice clips are 48 kHz; the model expects 16 kHz input.
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)

def map_to_array(batch):
    # Load each clip, resample it, and normalize the reference text.
    speech, _ = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech.squeeze(0)).numpy()
    batch["sampling_rate"] = resampler.new_freq
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() + " "
    return batch

ds = ds.map(map_to_array)

def map_to_pred(batch):
    features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt")
    input_values = features.input_values.to(device)
    attention_mask = features.attention_mask.to(device)
    with torch.no_grad():
        logits = model(input_values, attention_mask=attention_mask).logits
    # Greedy decoding: pick the highest-scoring token at each frame.
    pred_ids = torch.argmax(logits, dim=-1)
    batch["predicted"] = processor.batch_decode(pred_ids)
    batch["target"] = batch["sentence"]
    return batch

result = ds.map(map_to_pred, batched=True, batch_size=20, remove_columns=list(ds.features.keys()))

wer = load_metric("wer")
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
**Test Result**: 41.92 %
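
The script above scores the model on the whole Common Voice test split. For transcribing a single recording, a minimal sketch along the same lines (the file path is a placeholder and the clip is assumed to be mono):

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_name = "sammy786/wav2vec2-large-xlsr-mongolian"
device = "cuda"

model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(model_name)

# Load and resample a single recording; "audio.wav" is a placeholder path.
speech, sampling_rate = torchaudio.load("audio.wav")
if sampling_rate != 16_000:
    speech = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech)

# Run the model and greedily decode the highest-scoring tokens.
inputs = processor(speech.squeeze(0).numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values.to(device)).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```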