Ilyes committed on
Commit d47dcf9
1 Parent(s): 7d4f251

update files

---
language: fr
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: wav2vec2-large-xlsr-53-French by Ilyes Rebai
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice
      args: fr
    metrics:
    - name: Test WER
      type: wer
      value: 20.89%
---
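
## Usage

For a quick transcription check, the checkpoint can be loaded through the `transformers` ASR pipeline. This is a minimal sketch rather than the author's script: it assumes a recent `transformers` release with ffmpeg available for audio decoding, and `audio.wav` is a placeholder path.

```python
from transformers import pipeline

# Minimal usage sketch (not from the original card): the pipeline decodes the
# file and resamples it to the model's expected 16 kHz before transcription.
# "audio.wav" is a placeholder path.
asr = pipeline(
    "automatic-speech-recognition",
    model="Ilyes/wav2vec2-large-xlsr-53-french",
)
print(asr("audio.wav")["text"])
```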

## Evaluation on Common Voice FR Test

```python
import re

import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_name = "Ilyes/wav2vec2-large-xlsr-53-french"
device = "cuda"

# Load the fine-tuned model and its processor (tokenizer + feature extractor).
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(model_name)

# Common Voice FR test split.
ds = load_dataset("common_voice", "fr", split="test", cache_dir="./data/fr")

# Punctuation and special characters removed from the reference transcripts.
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\‘\’\’\’\‘\…\·\!\ǃ\?\«\‹\»\›“\”\\ʿ\ʾ\„\∞\\|\.\,\;\:\*\—\–\─\―\_\/\:\ː\;\,\=\«\»\→]'

# Common Voice audio is 48 kHz; the model expects 16 kHz input.
resampler = torchaudio.transforms.Resample(48_000, 16_000)

def map_to_array(batch):
    speech, _ = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech.squeeze(0)).numpy()
    batch["sampling_rate"] = resampler.new_freq
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
    return batch

ds = ds.map(map_to_array)

def map_to_pred(batch):
    features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt")
    input_values = features.input_values.to(device)
    attention_mask = features.attention_mask.to(device)
    with torch.no_grad():
        logits = model(input_values, attention_mask=attention_mask).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["predicted"] = processor.batch_decode(pred_ids)
    batch["target"] = batch["sentence"]
    return batch

result = ds.map(map_to_pred, batched=True, batch_size=16, remove_columns=list(ds.features.keys()))

wer = load_metric("wer")
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
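
The script above assumes a CUDA GPU is available; setting `device = "cpu"` runs the same evaluation on the CPU, just much more slowly.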

## Training

6% of the Common Voice `train` and `validation` splits were used for training.
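
The card does not say how the 6% subset was drawn. The snippet below is only an illustrative sketch of taking such a subset with the `datasets` split-slicing syntax; the actual selection used to train the released checkpoint may differ.

```python
from datasets import load_dataset, concatenate_datasets

# Illustrative only: take the first 6% of each split via split slicing.
# The exact subset used to train the released model is not documented.
train_6 = load_dataset("common_voice", "fr", split="train[:6%]", cache_dir="./data/fr")
valid_6 = load_dataset("common_voice", "fr", split="validation[:6%]", cache_dir="./data/fr")
train_ds = concatenate_datasets([train_6, valid_6])
print(train_ds)
```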

## Testing

The full Common Voice `test` split (15,763 files) was used for testing.

Results:
- WER = 20.89%
- SER = 77.56%
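
SER is not computed by the evaluation script above. Assuming it stands for sentence error rate, i.e. the fraction of utterances whose hypothesis differs from the normalized reference, a minimal sketch would be:

```python
def sentence_error_rate(predictions, references):
    # Fraction of utterances whose hypothesis does not exactly match the reference.
    errors = sum(p != r for p, r in zip(predictions, references))
    return errors / len(references)

# With the `result` dataset produced by the evaluation script above:
# print(sentence_error_rate(result["predicted"], result["target"]))
print(sentence_error_rate(["bonjour à tous", "merci"], ["bonjour à tous", "merci bien"]))  # 0.5
```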