mbien committed on
Commit 75c50a6
1 Parent(s): c122b80

Create README.md

README.md ADDED
<======================Copy **raw** version from here=========================
---
language: {lang_id} # TODO: replace {lang_id} with your language code here. Make sure the code is one of the *ISO codes* of [this](https://huggingface.co/languages) site.
datasets:
- common_voice # TODO: remove if you did not use the Common Voice dataset
- TODO: add more datasets if you used additional datasets. Make sure to use the exact same dataset name as the one found [here](https://huggingface.co/datasets). If the dataset cannot be found among the official datasets, just give it a new name.
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: {model_id} # TODO: replace {model_id} with your model id. The model id consists of {your_username}/{your_modelname}, *e.g.* `elgeish/wav2vec2-large-xlsr-53-arabic`
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice {lang_id} # TODO: replace {lang_id} with your language code here. Make sure the code is one of the *ISO codes* of [this](https://huggingface.co/languages) site.
      type: common_voice
      args: {lang_id} # TODO: replace {lang_id} with your language code here.
    metrics:
    - name: Test WER
      type: wer
      value: {wer_result_on_test} # TODO (IMPORTANT): replace {wer_result_on_test} with the WER you achieved on the common_voice test set, in the format XX.XX (do not add the % sign here). **Please** remember to fill in this value after you have evaluated your model, so that your model appears on the leaderboard. If you fill out this model card before evaluating your model, remember to edit it afterwards with the actual value.
---
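As a concrete illustration, a filled-in header for a hypothetical Turkish model might look like the following (the model id and WER value here are made up, purely for illustration):

```yaml
language: tr
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: your-username/wav2vec2-large-xlsr-53-turkish
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice tr
      type: common_voice
      args: tr
    metrics:
    - name: Test WER
      type: wer
      value: 21.30
```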

# Wav2Vec2-Large-XLSR-53-{language} # TODO: replace {language} with your language, *e.g.* French

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on {language} using the [Common Voice](https://huggingface.co/datasets/common_voice), ... and ... dataset{s}. # TODO: replace {language} with your language, *e.g.* French; add any further datasets that were used, and remove Common Voice if the model was not trained on it.
When using this model, make sure that your speech input is sampled at 16 kHz.

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "{lang_id}", split="test[:2%]") # TODO: replace {lang_id} with your language code here. Make sure the code is one of the *ISO codes* of [this](https://huggingface.co/languages) site.

processor = Wav2Vec2Processor.from_pretrained("{model_id}") # TODO: replace {model_id} with your model id. The model id consists of {your_username}/{your_modelname}, *e.g.* `elgeish/wav2vec2-large-xlsr-53-arabic`
model = Wav2Vec2ForCTC.from_pretrained("{model_id}") # TODO: replace {model_id} with your model id. The model id consists of {your_username}/{your_modelname}, *e.g.* `elgeish/wav2vec2-large-xlsr-53-arabic`

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays.
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
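Note the hard-coded `Resample(48_000, 16_000)`: Common Voice ships 48 kHz clips, while XLSR-53 expects 16 kHz input. As a rough sketch of the rate relationship only (torchaudio's `Resample` additionally low-pass filters to avoid aliasing, so do not use this naive version on real audio):

```python
def naive_downsample(samples, orig_rate=48_000, target_rate=16_000):
    # Conceptual sketch: keep every n-th sample, where n is the integer
    # rate ratio (3 for Common Voice's 48 kHz clips -> 16 kHz).
    assert orig_rate % target_rate == 0, "integer factor assumed"
    factor = orig_rate // target_rate
    return samples[::factor]

one_second = list(range(48_000))   # stand-in for one second of 48 kHz audio
out = naive_downsample(one_second)
print(len(out))                    # 16_000 samples = one second at 16 kHz
```

If your own audio is not 48 kHz, construct the resampler from the `sampling_rate` returned by `torchaudio.load` instead of hard-coding it.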


## Evaluation

The model can be evaluated as follows on the {language} test data of Common Voice. # TODO: replace {language} with your language, *e.g.* French


```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "{lang_id}", split="test") # TODO: replace {lang_id} with your language code here. Make sure the code is one of the *ISO codes* of [this](https://huggingface.co/languages) site.
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("{model_id}") # TODO: replace {model_id} with your model id. The model id consists of {your_username}/{your_modelname}, *e.g.* `elgeish/wav2vec2-large-xlsr-53-arabic`
model = Wav2Vec2ForCTC.from_pretrained("{model_id}") # TODO: replace {model_id} with your model id. The model id consists of {your_username}/{your_modelname}, *e.g.* `elgeish/wav2vec2-large-xlsr-53-arabic`
model.to("cuda")

chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]' # TODO: adapt this list to include all special characters you removed from the data
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays and normalize the transcripts.
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Run batched inference and decode the predictions.
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits

    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
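The `chars_to_ignore_regex` normalization matters: predictions are compared against lowercased, punctuation-free references, so any mismatch in cleaning inflates the reported WER. A quick self-contained check of what the default pattern does (extend the character class to match whatever you stripped from your training data):

```python
import re

# Same character class as in the evaluation script above, as a raw string.
chars_to_ignore_regex = r'[\,\?\.\!\-\;\:\"\“]'

def normalize(sentence):
    # Strip the ignored punctuation, then lowercase, mirroring the eval script.
    return re.sub(chars_to_ignore_regex, '', sentence).lower()

print(normalize('Hello, World!'))  # -> 'hello world'
```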

**Test Result**: XX.XX % # TODO: write the output of the print statement here. IMPORTANT: remember to also replace {wer_result_on_test} at the top of the model card with this value.
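For reference, `load_metric("wer")` computes the word error rate: the word-level edit distance (substitutions + insertions + deletions) divided by the number of reference words. A minimal pure-Python sketch of the same idea (not the library's exact implementation):

```python
def word_error_rate(reference, prediction):
    """Word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), prediction.split()
    # dp[i][j] = edits to turn the first i reference words into the first j predicted words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution / match
    return dp[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the cat sat", "the cat sat"))  # 0.0
print(word_error_rate("the cat sat", "the bat sat"))  # one substitution out of three words
```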


## Training

The Common Voice `train`, `validation`, and ... datasets were used for training, as well as ... and ... # TODO: adapt to state all the datasets that were used for training.

The script used for training can be found [here](...) # TODO: fill in a link to your training script here. If you trained your model in a Colab, simply fill in the link here. If you trained the model locally, it would be great if you could upload the training script to GitHub and paste the link here.

=======================To here===============================>