othrif committed
Commit cacc687
1 Parent(s): 0e90d89

adding moroccan dialect

README.md ADDED
@@ -0,0 +1,130 @@
---
language: ary
datasets:
- mgb5
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Moroccan Arabic dialect by Othmane Rifki
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: MGB5 from ELDA and https://arabicspeech.org/
      type: ELDA and https://arabicspeech.org/
      args: ary
    metrics:
    - name: Test WER
      type: wer
      value: 44.51
---

# Wav2Vec2-Large-XLSR-53-Moroccan

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on [MGB5 Moroccan Arabic](http://www.islrn.org/resources/938-639-614-524-5/), kindly provided by [ELDA](http://www.elra.info/en/about/elda/) and [ArabicSpeech](https://arabicspeech.org/mgb5/).

To get access to MGB5, please request it from ELDA.

When using this model, make sure that your speech input is sampled at 16 kHz.
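
For example, a local recording can be brought to 16 kHz with torchaudio before it is handed to the processor. This is only a minimal sketch; the file name is a placeholder:

```python
import torchaudio

# Hypothetical input file; replace with your own recording.
speech_array, sampling_rate = torchaudio.load("my_recording.wav")

# Resample to the 16 kHz expected by the model, if necessary.
if sampling_rate != 16_000:
    speech_array = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech_array)

speech = speech_array.squeeze().numpy()
```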

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Fill in the MGB5 dataset identifier once ELDA has granted you access.
test_dataset = load_dataset("", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("othrif/wav2vec2-large-xlsr-arabic")
model = Wav2Vec2ForCTC.from_pretrained("othrif/wav2vec2-large-xlsr-arabic")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays.
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```

## Evaluation

The model can be evaluated as follows on the Arabic test data of Common Voice.

```python
import re

import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "ar", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("othrif/wav2vec2-large-xlsr-arabic")
model = Wav2Vec2ForCTC.from_pretrained("othrif/wav2vec2-large-xlsr-arabic")
model.to("cuda")

chars_to_ignore_regex = '[\؛\—\_\«\»\ـ\,\?\.\!\-\;\:\"\“\%\‘\”\�\#\،\☭\؟]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays and strip punctuation from the references.
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Run batched inference and decode the predictions.
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits

    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**: 44.51

## Training

The Common Voice `train` and `validation` datasets were used for training.

The script used for training can be found [here](https://huggingface.co/othrif/wav2vec2-large-xlsr-arabic/tree/main).
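
As a rough sketch only (the actual training script is the one linked above), the hyperparameters stored in the `config.json` added in this commit suggest that the model was instantiated roughly as follows before fine-tuning; dataset preparation, the data collator and the trainer arguments are omitted here:

```python
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("othrif/wav2vec2-large-xlsr-arabic")

# Dropout, masking and CTC settings mirror the values in config.json below.
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-large-xlsr-53",
    activation_dropout=0.055,
    attention_dropout=0.094,
    hidden_dropout=0.047,
    feat_proj_dropout=0.04,
    mask_time_prob=0.082,
    layerdrop=0.041,
    gradient_checkpointing=True,
    ctc_loss_reduction="mean",
    pad_token_id=processor.tokenizer.pad_token_id,
    vocab_size=len(processor.tokenizer),
)

# Freezing the convolutional feature extractor is the usual practice when
# fine-tuning XLSR checkpoints on a small dataset.
model.freeze_feature_extractor()
```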
config.json ADDED
@@ -0,0 +1,76 @@
```json
{
  "_name_or_path": "facebook/wav2vec2-large-xlsr-53",
  "activation_dropout": 0.055,
  "apply_spec_augment": true,
  "architectures": ["Wav2Vec2ForCTC"],
  "attention_dropout": 0.094,
  "bos_token_id": 1,
  "conv_bias": true,
  "conv_dim": [512, 512, 512, 512, 512, 512, 512],
  "conv_kernel": [10, 3, 3, 3, 3, 2, 2],
  "conv_stride": [5, 2, 2, 2, 2, 2, 2],
  "ctc_loss_reduction": "mean",
  "ctc_zero_infinity": false,
  "do_stable_layer_norm": true,
  "eos_token_id": 2,
  "feat_extract_activation": "gelu",
  "feat_extract_dropout": 0.0,
  "feat_extract_norm": "layer",
  "feat_proj_dropout": 0.04,
  "final_dropout": 0.0,
  "gradient_checkpointing": true,
  "hidden_act": "gelu",
  "hidden_dropout": 0.047,
  "hidden_size": 1024,
  "initializer_range": 0.02,
  "intermediate_size": 4096,
  "layer_norm_eps": 1e-05,
  "layerdrop": 0.041,
  "mask_channel_length": 10,
  "mask_channel_min_space": 1,
  "mask_channel_other": 0.0,
  "mask_channel_prob": 0.0,
  "mask_channel_selection": "static",
  "mask_feature_length": 10,
  "mask_feature_prob": 0.0,
  "mask_time_length": 10,
  "mask_time_min_space": 1,
  "mask_time_other": 0.0,
  "mask_time_prob": 0.082,
  "mask_time_selection": "static",
  "model_type": "wav2vec2",
  "num_attention_heads": 16,
  "num_conv_pos_embedding_groups": 16,
  "num_conv_pos_embeddings": 128,
  "num_feat_extract_layers": 7,
  "num_hidden_layers": 24,
  "pad_token_id": 41,
  "transformers_version": "4.4.0",
  "vocab_size": 42
}
```
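
These hyperparameters can be inspected programmatically; a minimal check, assuming the repository id used in the usage example above:

```python
from transformers import Wav2Vec2Config

config = Wav2Vec2Config.from_pretrained("othrif/wav2vec2-large-xlsr-arabic")
print(config.hidden_size, config.num_hidden_layers, config.vocab_size)  # expected: 1024 24 42 for the config shown here
```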
preprocessor_config.json ADDED
@@ -0,0 +1,8 @@
```json
{
  "do_normalize": true,
  "feature_size": 1,
  "padding_side": "right",
  "padding_value": 0.0,
  "return_attention_mask": true,
  "sampling_rate": 16000
}
```
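
This file configures the `Wav2Vec2FeatureExtractor` half of the processor; a small sketch of loading it and preparing a raw 16 kHz waveform (the one-second array of silence is just a dummy input), again assuming the repository id from the examples above:

```python
import numpy as np
from transformers import Wav2Vec2FeatureExtractor

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("othrif/wav2vec2-large-xlsr-arabic")

dummy = np.zeros(16_000, dtype=np.float32)  # one second of silence at 16 kHz
features = feature_extractor(dummy, sampling_rate=16_000, return_tensors="pt")
print(features.input_values.shape)  # torch.Size([1, 16000])
```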
special_tokens_map.json ADDED
@@ -0,0 +1 @@
```json
{"bos_token": "<s>", "eos_token": "</s>", "unk_token": "[UNK]", "pad_token": "[PAD]"}
```
tokenizer_config.json ADDED
@@ -0,0 +1 @@
```json
{"unk_token": "[UNK]", "bos_token": "<s>", "eos_token": "</s>", "pad_token": "[PAD]", "do_lower_case": false, "word_delimiter_token": "|"}
```
vocab.json ADDED
@@ -0,0 +1 @@
```json
{"\u062a": 0, "\u0636": 1, "\u0648": 2, "\u062c": 3, "\u0638": 4, "\u0622": 5, "\u0630": 6, "\u0639": 7, "\u0634": 8, "\u0644": 9, "\u0632": 10, "\u0621": 11, "\u0623": 12, "\u0637": 13, "\u0624": 14, "\u0642": 15, "\u062e": 16, "\u0628": 17, "\u064a": 18, "\u0645": 19, "\u0626": 20, "\u062b": 21, "\u0647": 22, "\u0643": 23, "\u06a9": 24, "\u062f": 25, "\u0631": 26, "\u062d": 27, "\u0646": 28, "\u0633": 29, "\u0625": 30, "\u06cc": 31, "\u0641": 32, "\u0629": 33, "\u0635": 34, "\u0627": 35, "\u0649": 36, "\u063a": 37, "\u0670": 38, "|": 39, "[UNK]": 40, "[PAD]": 41}
```
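
Together, `vocab.json`, `tokenizer_config.json` and `special_tokens_map.json` define the CTC tokenizer that, combined with the feature extractor above, forms the `Wav2Vec2Processor` used in the usage and evaluation examples. A short sketch, again assuming the repository id from those examples:

```python
from transformers import Wav2Vec2CTCTokenizer

tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("othrif/wav2vec2-large-xlsr-arabic")

print(len(tokenizer))  # should match the 42 entries in vocab.json above
print(tokenizer.pad_token, tokenizer.unk_token, tokenizer.word_delimiter_token)  # [PAD] [UNK] |
```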