PereLluis13 committed 3b1702c (parents: 708e782, d9a5de6)

Merge branch 'main' of https://huggingface.co/PereLluis13/wav2vec2-large-xlsr-53-greek into main

Files changed (1): README.md added (+145 lines)
---
language: el
datasets:
- common_voice
- CSS10
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Greek XLSR Wav2Vec2 Large 53 - CV + CSS10
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice el
      type: common_voice
      args: el
    metrics:
    - name: Test WER
      type: wer
      value: 34.75
---

# Wav2Vec2-Large-XLSR-53-greek

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Greek using the [Common Voice](https://huggingface.co/datasets/common_voice) and [CSS10](https://github.com/Kyubyong/css10) datasets.

When using this model, make sure that your speech input is sampled at 16 kHz.

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "el", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("PereLluis13/wav2vec2-large-xlsr-53-greek")
model = Wav2Vec2ForCTC.from_pretrained("PereLluis13/wav2vec2-large-xlsr-53-greek")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays.
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```

## Evaluation

The model can be evaluated as follows on the Greek test data of Common Voice.

```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "el", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("PereLluis13/wav2vec2-large-xlsr-53-greek")
model = Wav2Vec2ForCTC.from_pretrained("PereLluis13/wav2vec2-large-xlsr-53-greek")
model.to("cuda")

chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\'\\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays.
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Run batched inference and decode the predictions.
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits

    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
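Word error rate, as computed by the `wer` metric above, is the word-level edit distance between prediction and reference divided by the number of reference words. A minimal pure-Python sketch of the metric itself (an illustration, not the `load_metric` implementation):

```python
# Minimal word error rate: word-level Levenshtein distance divided by the
# number of reference words. Illustration only, not the `datasets` metric.
def word_error_rate(prediction: str, reference: str) -> float:
    pred, ref = prediction.split(), reference.split()
    # dp[j] is the edit distance between the first i prediction words
    # and the first j reference words (rolling one-row DP).
    dp = list(range(len(ref) + 1))
    for i in range(1, len(pred) + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, len(ref) + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                           # deletion
                        dp[j - 1] + 1,                       # insertion
                        prev + (pred[i - 1] != ref[j - 1]))  # substitution
            prev = cur
    return dp[len(ref)] / len(ref)

print(word_error_rate("the cat sat", "the cat sat down"))  # 0.25
```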

**Test Result**: 34.75 %

## Training

The Common Voice `train` and `validation` splits were used for training, together with the CSS10 dataset, which was added as an `extra` split. Since the sampling rate and format of the CSS10 files differ, the function `speech_file_to_array_fn` was changed to:

```python
import librosa
import soundfile as sf

def speech_file_to_array_fn(batch):
    try:
        # CSS10 files are converted to 16 kHz WAV on first access and cached.
        speech_array, sampling_rate = sf.read(batch["path"] + ".wav")
    except Exception:
        speech_array, sampling_rate = librosa.load(batch["path"], sr=16000, res_type='zero_order_hold')
        sf.write(batch["path"] + ".wav", speech_array, sampling_rate, subtype='PCM_24')
    batch["speech"] = speech_array
    batch["sampling_rate"] = sampling_rate
    batch["target_text"] = batch["text"]
    return batch
```

As suggested by Florian Zimmermeister.

The script used for training can be found in [run_common_voice.py](examples/research_projects/wav2vec2/run_common_voice.py), still pending a PR. The only changes are to `speech_file_to_array_fn`. Batch size was kept at 32 (using `gradient_accumulation_steps`) on one of the [OVH](https://www.ovh.com/) machines with a V100 GPU (thank you very much, [OVH](https://www.ovh.com/)). The model trained for 40 epochs: the first 20 on the `train+validation` splits, with the `extra` CSS10 split added from the 20th epoch onward.
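The batch-size setup described above could be expressed with `transformers` `TrainingArguments` roughly as follows; the 8 x 4 split between per-device batch size and accumulation steps, and every other value, are illustrative assumptions rather than the run's actual configuration:

```python
from transformers import TrainingArguments

# Hypothetical sketch: an effective batch size of 32 on a single V100 via
# gradient accumulation. All values are assumptions, not the actual config.
training_args = TrainingArguments(
    output_dir="./wav2vec2-large-xlsr-53-greek",
    per_device_train_batch_size=8,   # 8 * 4 accumulation steps = 32 effective
    gradient_accumulation_steps=4,
    num_train_epochs=40,
    fp16=True,
)
```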