tyoc213 committed on
Commit
c3b675f
1 Parent(s): 8baca08

wer 50.95, +less60wer.ipynb

Files changed (5)
  1. README.md +98 -1
  2. config.json +2 -2
  3. less60wer.ipynb +0 -0
  4. pytorch_model.bin +2 -2
  5. vocab.json +1 -1
README.md CHANGED
@@ -23,4 +23,101 @@ model-index:
  value: 69.11
  ---
 
- First full try on the new dataset. Updates to come.
+ # Wav2Vec2-Large-XLSR-53-ncj/nah
+
+ Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Nahuatl, specifically the Northern Puebla variant (ncj), using a derivative of [SLR92](https://www.openslr.org/92/) plus some samples from the `es` and `de` [Common Voice](https://huggingface.co/datasets/common_voice) datasets.
+
+ ## Usage
+
+ The model can be used directly (without a language model) as follows:
+
+ ```python
+ import torch
+ import torchaudio
+ from datasets import load_dataset
+ from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
+
+ test_dataset = load_dataset("common_voice", "{lang_id}", split="test[:2%]") # TODO: publish nahuatl_slr92_by_sentence
+
+ processor = Wav2Vec2Processor.from_pretrained("tyoc213/wav2vec2-large-xlsr-nahuatl")
+ model = Wav2Vec2ForCTC.from_pretrained("tyoc213/wav2vec2-large-xlsr-nahuatl")
+
+ resampler = torchaudio.transforms.Resample(48_000, 16_000)
+
+ # Preprocessing the datasets.
+ # We need to read the audio files as arrays.
+ def speech_file_to_array_fn(batch):
+     speech_array, sampling_rate = torchaudio.load(batch["path"])
+     batch["speech"] = resampler(speech_array).squeeze().numpy()
+     return batch
+
+ test_dataset = test_dataset.map(speech_file_to_array_fn)
+ inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
+
+ with torch.no_grad():
+     logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
+
+ predicted_ids = torch.argmax(logits, dim=-1)
+
+ print("Prediction:", processor.batch_decode(predicted_ids))
+ print("Reference:", test_dataset["sentence"][:2])
+ ```
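+
+ For a quick sanity check on a single local recording, a minimal sketch (not part of the original script; `sample.wav` is a hypothetical path, and the clip is resampled from whatever rate it has down to the 16 kHz the model expects):
+
+ ```python
+ import torch
+ import torchaudio
+ from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
+
+ processor = Wav2Vec2Processor.from_pretrained("tyoc213/wav2vec2-large-xlsr-nahuatl")
+ model = Wav2Vec2ForCTC.from_pretrained("tyoc213/wav2vec2-large-xlsr-nahuatl")
+
+ speech, rate = torchaudio.load("sample.wav")  # hypothetical local file
+ speech = torchaudio.transforms.Resample(rate, 16_000)(speech).squeeze()
+
+ inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt")
+ with torch.no_grad():
+     logits = model(inputs.input_values).logits
+ print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
+ ```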
+
+ ## Evaluation
+
+ The model can be evaluated as follows on the Nahuatl (Northern Puebla, ncj) test data of Common Voice.
+
+ ```python
+ import torch
+ import torchaudio
+ from datasets import load_dataset, load_metric
+ from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
+ import re
+
+ test_dataset = load_dataset("common_voice", "{lang_id}", split="test") # TODO: publish nahuatl_slr92_by_sentence
+ wer = load_metric("wer")
+
+ processor = Wav2Vec2Processor.from_pretrained("tyoc213/wav2vec2-large-xlsr-nahuatl")
+ model = Wav2Vec2ForCTC.from_pretrained("tyoc213/wav2vec2-large-xlsr-nahuatl")
+ model.to("cuda")
+
+ chars_to_ignore_regex = '[\,\?\.\!\-\;\"\“\%\‘\”\�\(\)]'
+ resampler = torchaudio.transforms.Resample(48_000, 16_000)
+
+ # Preprocessing the datasets.
+ # We need to read the audio files as arrays and normalize the transcripts.
+ def speech_file_to_array_fn(batch):
+     batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
+     speech_array, sampling_rate = torchaudio.load(batch["path"])
+     batch["speech"] = resampler(speech_array).squeeze().numpy()
+     return batch
+
+ test_dataset = test_dataset.map(speech_file_to_array_fn)
+
+ # Run batched inference and decode the predictions.
+ def evaluate(batch):
+     inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
+
+     with torch.no_grad():
+         logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
+
+     pred_ids = torch.argmax(logits, dim=-1)
+     batch["pred_strings"] = processor.batch_decode(pred_ids)
+     return batch
+
+ result = test_dataset.map(evaluate, batched=True, batch_size=8)
+
+ print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
+ ```
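+
+ To see where the remaining errors concentrate, a small follow-up sketch (assuming `result` and `wer` from the script above):
+
+ ```python
+ # Rank test sentences by per-example WER and show the worst five.
+ pairs = sorted(
+     zip(result["sentence"], result["pred_strings"]),
+     key=lambda p: wer.compute(predictions=[p[1]], references=[p[0]]),
+     reverse=True,
+ )
+ for ref, pred in pairs[:5]:
+     print("REF :", ref)
+     print("PRED:", pred)
+ ```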
+
+ **Test Result**: 50.95 %
+
+ ## Training
+
+ Training used a derivative of [SLR92](https://www.openslr.org/92/), to be published soon, plus some samples from the `es` and `de` [Common Voice](https://huggingface.co/datasets/common_voice) datasets.
+
+ The script used for training can be found in [less60wer.ipynb](./less60wer.ipynb).
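+
+ The card does not say how the `es` and `de` samples were selected; a hypothetical sketch of mixing a small slice of each into the training set with `datasets` (the split sizes are illustrative only):
+
+ ```python
+ # Hypothetical sketch: pull small es/de Common Voice slices to mix into training.
+ from datasets import load_dataset, concatenate_datasets
+
+ es = load_dataset("common_voice", "es", split="train[:1%]")
+ de = load_dataset("common_voice", "de", split="train[:1%]")
+ extra = concatenate_datasets([es, de])  # appended to the Nahuatl training data
+ ```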
config.json CHANGED
@@ -70,7 +70,7 @@
  "num_conv_pos_embeddings": 128,
  "num_feat_extract_layers": 7,
  "num_hidden_layers": 24,
- "pad_token_id": 42,
+ "pad_token_id": 44,
  "transformers_version": "4.5.0.dev0",
- "vocab_size": 43
+ "vocab_size": 45
  }
less60wer.ipynb ADDED
The diff for this file is too large to render. See raw diff
 
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:41e78fb1cf5485a524c97d2b63d3669629a1a5a037282d4d810f3c294d402a29
- size 1262110103
+ oid sha256:f05c8cadb48e7e8aa1177ab3193ad36b7871134416dbaccd5c773220cf44dcc2
+ size 1262118359
vocab.json CHANGED
@@ -1 +1 @@
- {"k": 0, "ú": 1, "¿": 2, "v": 3, "]": 4, "x": 5, "c": 6, "{": 7, "f": 8, "d": 9, "i": 10, "t": 11, "j": 12, "a": 13, "y": 14, "e": 15, "é": 16, "z": 17, "'": 18, "[": 19, "u": 20, "*": 21, "¡": 22, "r": 23, "ñ": 24, "q": 25, "á": 26, "s": 27, "b": 29, "´": 30, "m": 31, "ó": 32, "l": 33, "p": 34, "í": 35, "o": 36, "w": 37, "g": 38, "h": 39, "n": 40, "|": 28, "[UNK]": 41, "[PAD]": 42}
+ {"x": 0, "v": 1, "]": 2, "í": 3, ":": 4, "k": 5, "y": 6, "ö": 7, "'": 8, "h": 9, "¿": 11, "ñ": 12, "n": 13, "ü": 14, "ä": 15, "t": 16, "m": 17, "s": 18, "g": 19, "á": 20, "z": 21, "o": 22, "w": 23, "[": 24, "r": 25, "b": 26, "ß": 27, "d": 28, "ó": 29, "i": 30, "e": 31, "": 32, "ú": 33, "c": 34, "f": 35, "p": 36, "a": 37, "l": 38, "q": 39, "j": 40, "u": 41, "é": 42, "|": 10, "[UNK]": 43, "[PAD]": 44}