skylord committed on
Commit
7040dcf
1 Parent(s): 157cb0a

Added readme

Files changed (2)
  1. .ipynb_checkpoints/README-checkpoint.md +131 -0
  2. README.md +131 -0
.ipynb_checkpoints/README-checkpoint.md ADDED
@@ -0,0 +1,131 @@
(verbatim duplicate of README.md below)
README.md ADDED
@@ -0,0 +1,131 @@
---
language: el
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Greek XLSR Wav2Vec2 Large 53
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice el
      type: common_voice
      args: el
    metrics:
    - name: Test WER
      type: wer
      value: 34.006258
---

# Wav2Vec2-Large-XLSR-53-Greek

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Greek using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
The Greek Common Voice data contains mostly male voices. To balance it, synthesised female voices were generated from the Common Voice text using [Google's TTS Standard Voice model](https://cloud.google.com/text-to-speech), following the approach discussed in this [Slack thread](https://huggingface.slack.com/archives/C01QZ90Q83Z/p1616741140114800).

Training progression:

* Fine-tuned facebook/wav2vec2-large-xlsr-53 on Greek Common Voice for 5 epochs >> 56.25% WER
* Resumed from the checkpoint and trained for another 15 epochs >> 34.00% WER
* Added the synthesised female voices and trained for 12 more epochs >> 34.00% WER (no change)

When using this model, make sure that your speech input is sampled at 16 kHz.
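
Before running inference on your own recordings, it is worth verifying the sampling rate. A stdlib-only sketch of that check for WAV files (the tiny in-memory file here is constructed purely for demonstration; actual resampling is handled by `torchaudio.transforms.Resample` in the examples that follow):

```python
import io
import struct
import wave

# Build a tiny mono 16 kHz WAV in memory, standing in for a real file on disk
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)          # 16-bit samples
    w.setframerate(16_000)
    w.writeframes(struct.pack("<4h", 0, 100, -100, 0))

# Read the header back and check the rate before feeding audio to the model
buf.seek(0)
with wave.open(buf, "rb") as w:
    rate = w.getframerate()

print(rate)
assert rate == 16_000, "resample this file to 16 kHz before inference"
```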

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "el", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("skylord/greek_lsr_1")
model = Wav2Vec2ForCTC.from_pretrained("skylord/greek_lsr_1")

# Common Voice audio is 48 kHz; the model expects 16 kHz
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
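
`processor.batch_decode` performs greedy CTC decoding on the argmax ids: collapse consecutive repeats, drop the blank token, then map ids to characters. A minimal pure-Python sketch of that collapse step (the vocabulary and blank id below are invented for illustration; the real ones come from the model's tokenizer):

```python
# Illustrative greedy CTC collapse: frame-level ids -> text.
# BLANK and VOCAB are made up for demonstration only.
BLANK = 0
VOCAB = {1: "α", 2: "β", 3: "γ", 4: " "}

def ctc_greedy_decode(ids):
    """Collapse consecutive duplicates, then remove blank tokens."""
    out = []
    prev = None
    for i in ids:
        if i != prev and i != BLANK:
            out.append(VOCAB[i])
        prev = i
    return "".join(out)

# The repeated 1s collapse, the blanks separate the two "α" emissions
print(ctc_greedy_decode([1, 1, 0, 1, 2, 2, 0, 3]))  # ααβγ
```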


## Evaluation

The model can be evaluated as follows on the Greek test data of Common Voice.


```python
import re

import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "el", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("skylord/greek_lsr_1")
model = Wav2Vec2ForCTC.from_pretrained("skylord/greek_lsr_1")
model.to("cuda")

chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays and normalise the reference sentences
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Run batched greedy decoding on the GPU
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
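
The `chars_to_ignore_regex` normalisation can be checked in isolation; the Greek sentence below is invented for illustration:

```python
import re

# Same punctuation class as in the evaluation script
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'

# References are stripped of punctuation and lowercased before scoring
sentence = "Καλημέρα, τι κάνεις;"
cleaned = re.sub(chars_to_ignore_regex, '', sentence).lower()
print(cleaned)  # καλημέρα τι κάνεις
```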

**Test Result**: 34.006258 %
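
For reference, the WER above is word-level edit distance divided by the number of reference words; `load_metric("wer")` is what the script actually uses, but the metric can be sketched in pure Python (example sentences invented for illustration):

```python
def wer_score(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count.

    Assumes a non-empty reference.
    """
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for edit distance over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substituted word in a three-word reference -> WER of 1/3
print(wer_score("το μικρό παιδί", "το μεγάλο παιδί"))
```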


## Training

The Common Voice `train` and `validation` splits were used for training, together with the synthesised female voices generated from the Common Voice text.

The script used for training can be found [here](...) # TODO: fill in a link to your training script here. If you trained your model in a colab, simply fill in the link here. If you trained the model locally, it would be great if you could upload the training script on github and paste the link here.