---
language: ja
datasets:
- common_voice
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: {human_readable_name} # TODO: replace {human_readable_name} with the name your model should carry on the leaderboard, e.g. `Elgeish XLSR Wav2Vec2 Large 53`
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice ja
      type: common_voice
      args: ja
    metrics:
    - name: Test WER
      type: wer
      value: {wer_result_on_test} # TODO (IMPORTANT): replace {wer_result_on_test} with the WER achieved on the common_voice test set, formatted as XX.XX (no % sign). Fill this in after evaluating the model so it appears on the leaderboard.
---

# Wav2Vec2-Large-XLSR-53-Japanese

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Japanese using the [Common Voice](https://huggingface.co/datasets/common_voice), ... and ... datasets. <!-- TODO: list any additional training datasets, and remove Common Voice if the model was not trained on it. -->
When using this model, make sure that your speech input is sampled at 16 kHz.
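
The snippets below handle this with `torchaudio.transforms.Resample`. As a rough illustration of why 48 kHz Common Voice audio maps cleanly onto the model's 16 kHz input, here is a naive decimation sketch; the helper name is hypothetical, and a real resampler such as `torchaudio.transforms.Resample` also applies an anti-aliasing low-pass filter, so this is illustrative only:

```python
def naive_downsample(samples, src_rate=48_000, dst_rate=16_000):
    # 48 kHz is exactly 3x 16 kHz, so keeping every third sample changes
    # the rate. Production code should use torchaudio.transforms.Resample,
    # which also low-pass filters to avoid aliasing.
    assert src_rate % dst_rate == 0, "only integer decimation is sketched here"
    step = src_rate // dst_rate
    return samples[::step]

print(len(naive_downsample(list(range(48_000)))))  # 16000
```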

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "ja", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("qqhann/wav2vec2-large-xlsr-japanese-0325-1200")
model = Wav2Vec2ForCTC.from_pretrained("qqhann/wav2vec2-large-xlsr-japanese-0325-1200")

# Common Voice audio is 48 kHz; the model expects 16 kHz input.
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays.
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```

## Evaluation

The model can be evaluated as follows on the Japanese test data of Common Voice.

```python
import re

import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "ja", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("qqhann/wav2vec2-large-xlsr-japanese-0325-1200")
model = Wav2Vec2ForCTC.from_pretrained("qqhann/wav2vec2-large-xlsr-japanese-0325-1200")
model.to("cuda")

chars_to_ignore_regex = r'[\,\?\.\!\-\;\:\"\“]'  # TODO: adapt this list to include all special characters you removed from the data
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays.
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, "", batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Run batched inference and decode the predicted token IDs to strings.
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits

    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
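
The metadata above also lists CER, which is often more informative than WER for Japanese, since the text is not whitespace-segmented into words. As a minimal sketch of how a character error rate could be computed over the decoded strings (the helper names `levenshtein` and `cer` are illustrative; a dedicated metric such as `jiwer` or a `cer` metric script could be used instead):

```python
def levenshtein(a, b):
    # Dynamic-programming edit distance between two sequences,
    # keeping only the previous row of the DP table.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def cer(predictions, references):
    # Total character-level edits divided by total reference characters.
    edits = sum(levenshtein(p, r) for p, r in zip(predictions, references))
    chars = sum(len(r) for r in references)
    return edits / chars
```

With the evaluation script above, this could be invoked as `cer(result["pred_strings"], result["sentence"])`.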

**Test Result**: XX.XX %

<!-- TODO: paste the printed WER here. IMPORTANT: also replace {wer_result_on_test} in the metadata at the top of this card with the same value. -->

## Training

The Common Voice `train`, `validation`, and ... datasets were used for training, as well as ... and ...

<!-- TODO: adapt to state all the datasets that were used for training. -->

The script used for training can be found [here](...).

<!-- TODO: fill in a link to your training script. If you trained in a Colab, paste the notebook link; if you trained locally, consider uploading the script to GitHub and linking it here. -->