---
language: ka
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Georgian WAV2VEC2 Daytona
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice ka
      type: common_voice
      args: ka
    metrics:
    - name: Test WER
      type: wer
      value: 48.34
---

# Wav2Vec2-Large-XLSR-53-Georgian

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Georgian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16 kHz.
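
If your audio is not already at 16 kHz (Common Voice clips are 48 kHz), it can be resampled first, for example with torchaudio. A minimal sketch, assuming a hypothetical local file `my_recording.wav`:

```python
import torchaudio

# Hypothetical input file; replace with your own recording.
speech_array, sampling_rate = torchaudio.load("my_recording.wav")

# Resample to the 16 kHz rate the model expects, if necessary.
if sampling_rate != 16_000:
    speech_array = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech_array)
```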

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "ka", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("Temur/wav2vec2-Georgian-Daytona")
model = Wav2Vec2ForCTC.from_pretrained("Temur/wav2vec2-Georgian-Daytona")

# Common Voice clips are 48 kHz; the model expects 16 kHz input.
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays.
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
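
Alternatively, recent versions of `transformers` provide an `automatic-speech-recognition` pipeline that handles audio decoding and resampling internally (it relies on ffmpeg to read audio files). A minimal sketch, with `georgian_clip.wav` as a placeholder path:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint into the ASR pipeline.
asr = pipeline("automatic-speech-recognition", model="Temur/wav2vec2-Georgian-Daytona")

# Hypothetical local file with Georgian speech; the pipeline resamples it as needed.
print(asr("georgian_clip.wav")["text"])
```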

## Evaluation

The model can be evaluated as follows on the Georgian test data of Common Voice.

```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "ka", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("Temur/wav2vec2-Georgian-Daytona")
model = Wav2Vec2ForCTC.from_pretrained("Temur/wav2vec2-Georgian-Daytona")
model.to("cuda")

# Punctuation stripped from the reference transcripts before scoring.
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays.
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Run batched inference and decode the greedy (argmax) CTC predictions.
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits

    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**: 48.34 % 
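
Note: `datasets.load_metric` is deprecated in recent versions of `datasets`; the same WER computation can be done with the separate `evaluate` library. A minimal sketch, shown with placeholder strings instead of `result["pred_strings"]` / `result["sentence"]`:

```python
import evaluate  # pip install evaluate jiwer

wer_metric = evaluate.load("wer")

# Same call shape as wer.compute(...) above, using toy Georgian strings.
score = wer_metric.compute(
    predictions=["გამარჯობა მსოფლიო"],
    references=["გამარჯობა მსოფლიო"],
)
print("WER: {:.2f}".format(100 * score))
```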

## Training

The Common Voice `train`, `validation`, and ... datasets were used for training, as well as ... and ...

The script used for training can be found [here](https://github.com/huggingface/transformers/blob/master/examples/research_projects/wav2vec2/FINE_TUNE_XLSR_WAV2VEC2.md).
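
The linked guide fine-tunes the multilingual XLSR checkpoint with a randomly initialized CTC head on top of it. A minimal sketch of that model setup, using the guide's illustrative hyperparameters (not necessarily the exact values used for this checkpoint):

```python
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Processor published with this model (Georgian vocabulary + feature extractor).
processor = Wav2Vec2Processor.from_pretrained("Temur/wav2vec2-Georgian-Daytona")

# Start from the pretrained XLSR encoder and attach a CTC head sized to the vocabulary.
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-large-xlsr-53",
    attention_dropout=0.1,
    hidden_dropout=0.1,
    feat_proj_dropout=0.0,
    mask_time_prob=0.05,
    layerdrop=0.1,
    ctc_loss_reduction="mean",
    pad_token_id=processor.tokenizer.pad_token_id,
    vocab_size=len(processor.tokenizer),
)

# The convolutional feature encoder is kept frozen during fine-tuning in the guide.
model.freeze_feature_extractor()
```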