aapot committed
Commit a13f7fa
1 Parent(s): 232c9d4

Update readme

Files changed (2)
  1. README.md +77 -20
  2. run-finnish-asr-models.ipynb +1 -0
README.md CHANGED
@@ -32,41 +32,64 @@ model-index:
    value: 1.2
 ---
 
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
- # wav2vec2-xlsr-1b-finnish-lm
- **Note**: there is a better V2 version of this model which has been fine-tuned longer with 16 hours of more data: [aapot/wav2vec2-xlsr-1b-finnish-lm-v2](https://huggingface.co/aapot/wav2vec2-xlsr-1b-finnish-lm-v2)
- This acoustic model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) for Finnish ASR. The model has been fine-tuned with 259.57 hours of Finnish transcribed speech data.
- It achieves the following results on the Common Voice 7 test set together with language model (Finnish KenLM):
- - Wer: 5.65
- - Cer: 1.20
+ # Wav2Vec2 XLS-R for Finnish ASR
+
+ This acoustic model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) for Finnish ASR. The model has been fine-tuned with 259.57 hours of Finnish transcribed speech data. Wav2Vec2 XLS-R was introduced in
+ [this paper](https://arxiv.org/abs/2111.09296) and first released at [this page](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec#wav2vec-20).
+
+ This repository also includes the Finnish KenLM language model used in the decoding phase with the acoustic model.
+
+ **Note**: this model is exactly the same as the [Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm) model; it has just been copied/moved to the `Finnish-NLP` Hugging Face organization.
+
+ **Note**: there is a better V2 version of this model which has been fine-tuned longer with 16 hours more data: [Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2)
 
 ## Model description
 
- TODO
+ Wav2Vec2 XLS-R is Facebook AI's large-scale multilingual pretrained model for speech. It is pretrained on 436k hours of unlabeled speech in 128 languages, including VoxPopuli, MLS, CommonVoice, BABEL, and VoxLingua107, using the wav2vec 2.0 objective.
+
+ You can read more about the pretrained model from [this blog](https://ai.facebook.com/blog/xls-r-self-supervised-speech-processing-for-128-languages) and [this paper](https://arxiv.org/abs/2111.09296).
+
+ This model is a fine-tuned version of the pretrained model (the 1 billion parameter variant) for Finnish ASR.
 
 ## Intended uses & limitations
 
- TODO
+ You can use this model for the Finnish ASR (speech-to-text) task.
+
+ ### How to use
+
+ Check the [run-finnish-asr-models.ipynb](https://huggingface.co/aapot/wav2vec2-xlsr-1b-finnish-lm/blob/main/run-finnish-asr-models.ipynb) notebook in this repository for a detailed example of how to use this model, or start from the condensed sketch below.
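+
+ The following sketch mirrors the model loading and pipeline setup of that notebook; `audio.wav` is a placeholder for an audio file of your own:
+
+ ```python
+ import torch
+ from transformers import AutoModelForCTC, AutoProcessor, pipeline
+
+ asr_model_name = "aapot/wav2vec2-xlsr-1b-finnish-lm"
+
+ # the processor bundles the feature extractor, tokenizer and KenLM-boosted decoder
+ processor = AutoProcessor.from_pretrained(asr_model_name)
+ model = AutoModelForCTC.from_pretrained(asr_model_name)
+
+ # device=0 runs on the first GPU, device=-1 on CPU
+ device = 0 if torch.cuda.is_available() else -1
+
+ asr = pipeline(
+     "automatic-speech-recognition",
+     model=model,
+     tokenizer=processor.tokenizer,
+     feature_extractor=processor.feature_extractor,
+     decoder=processor.decoder,
+     device=device,
+ )
+
+ # the pipeline resamples the input audio to the model's 16 kHz sampling rate
+ print(asr("audio.wav")["text"])
+ ```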
+
+ ### Limitations and bias
+
+ This model was fine-tuned with audio samples whose maximum length was 20 seconds, so it most likely works best for fairly short audios of similar length. However, you can also try it with much longer audios and see how it works. If you encounter out-of-memory errors with very long audio files, you can use the audio chunking method introduced in [this blog post](https://huggingface.co/blog/asr-chunking).
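+
+ As a rough sketch, chunked inference with the `transformers` ASR pipeline looks like the following; the chunk and stride values are just the example values mentioned in the notebook of this repository, not tuned recommendations:
+
+ ```python
+ # "asr" is the pipeline from the usage example above; "long_audio.wav" is a
+ # placeholder for a long audio file of your own
+ prediction = asr("long_audio.wav", chunk_length_s=6, stride_length_s=(2, 2))
+ print(prediction["text"])
+ ```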
+
+ The vast majority of the data used for fine-tuning was from the Finnish Parliament dataset, so this model may not generalize well to very different domains, such as everyday spoken Finnish with dialects. In addition, the audio in the datasets tends to be dominated by adult male speakers, so this model may not work as well for the speech of children and women, for example.
+
+ The Finnish KenLM language model used in the decoding phase has been trained with text data from the audio transcriptions. Thus, the decoder's language model may not generalize to very different language, for example everyday spoken language with dialects. It may be beneficial to train your own KenLM language model for your domain language and use that in the decoding.
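+
+ A possible sketch of swapping in your own KenLM model, following the n-gram tutorial linked in the training procedure section below; `my_domain_5gram.arpa` is an assumed path to a KenLM model you have trained yourself:
+
+ ```python
+ from pyctcdecode import build_ctcdecoder
+ from transformers import AutoProcessor, Wav2Vec2ProcessorWithLM
+
+ processor = AutoProcessor.from_pretrained("aapot/wav2vec2-xlsr-1b-finnish-lm")
+
+ # sort the tokenizer vocabulary by token id to get the CTC labels
+ vocab = processor.tokenizer.get_vocab()
+ labels = [token for token, _ in sorted(vocab.items(), key=lambda kv: kv[1])]
+
+ decoder = build_ctcdecoder(labels, kenlm_model_path="my_domain_5gram.arpa")
+
+ processor_with_own_lm = Wav2Vec2ProcessorWithLM(
+     feature_extractor=processor.feature_extractor,
+     tokenizer=processor.tokenizer,
+     decoder=decoder,
+ )
+ ```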
+
- ## Training and evaluation data
+ ## Training data
 
 This model was fine-tuned with 259.57 hours of Finnish transcribed speech data from the following datasets:
 
+ | Dataset | Hours | % of total hours |
+ |:-----------------------------------------------------------------------------------------------------------------------------------|:--------:|:----------------:|
+ | [Common Voice 7.0 Finnish train + evaluation + other splits](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | 9.70 h | 3.74 % |
+ | [Finnish parliament session 2](https://b2share.eudat.eu/records/4df422d631544ce682d6af1d4714b2d4) | 0.24 h | 0.09 % |
+ | [VoxPopuli Finnish](https://github.com/facebookresearch/voxpopuli) | 5.94 h | 2.29 % |
+ | [CSS10 Finnish](https://github.com/kyubyong/css10) | 10.32 h | 3.98 % |
+ | [Aalto Finnish Parliament ASR Corpus](http://urn.fi/urn:nbn:fi:lb-2021051903) | 228.00 h | 87.84 % |
+ | [Finnish Broadcast Corpus](http://urn.fi/urn:nbn:fi:lb-2016042502) | 5.37 h | 2.07 % |
+
+ The datasets were filtered to include only audio samples with a maximum length of 20 seconds.
 
 ## Training procedure
 
+ This model was trained during the [Robust Speech Challenge Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614) organized by Hugging Face. Training was done on a Tesla V100 GPU, sponsored by OVHcloud.
+
+ The training script was provided by Hugging Face and is available [here](https://github.com/huggingface/transformers/blob/main/examples/research_projects/robust-speech-event/run_speech_recognition_ctc_bnb.py). We only modified its data loading for our custom datasets.
+
+ For the KenLM language model training, we followed the [blog post tutorial](https://huggingface.co/blog/wav2vec2-with-ngram) provided by Hugging Face. Training data for the 5-gram KenLM were the text transcriptions of the audio training data, as sketched below.
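+
+ As a hypothetical sketch of that step (file names here are made up for illustration), the tutorial trains the 5-gram with KenLM's `lmplz` binary on a plain-text corpus of the transcriptions:
+
+ ```python
+ import subprocess
+
+ # "transcriptions" is an assumed list of the training transcription strings
+ with open("lm_corpus.txt", "w") as f:
+     f.write(" ".join(transcriptions))
+
+ # lmplz comes from a compiled KenLM checkout, as shown in the tutorial
+ with open("lm_corpus.txt") as corpus, open("5gram.arpa", "w") as arpa:
+     subprocess.run(["kenlm/build/bin/lmplz", "-o", "5"], stdin=corpus, stdout=arpa, check=True)
+ ```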
+
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
@@ -80,6 +103,15 @@ The following hyperparameters were used during training:
 - num_epochs: 5
 - mixed_precision_training: Native AMP
 
+ The pretrained `facebook/wav2vec2-xls-r-1b` model was initialized with the following hyperparameters (see the sketch after this list):
+ - attention_dropout: 0.094
+ - hidden_dropout: 0.047
+ - feat_proj_dropout: 0.04
+ - mask_time_prob: 0.082
+ - layerdrop: 0.041
+ - activation_dropout: 0.055
+ - ctc_loss_reduction: "mean"
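+
+ A sketch of how these initialization hyperparameters map onto the pretrained checkpoint; the tokenizer lines are assumptions from a typical CTC fine-tuning setup, not values confirmed by this model card:
+
+ ```python
+ from transformers import Wav2Vec2CTCTokenizer, Wav2Vec2ForCTC
+
+ # assumed: reuse this repository's tokenizer; in the original fine-tuning the
+ # tokenizer was instead built from the training transcriptions
+ tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("aapot/wav2vec2-xlsr-1b-finnish-lm")
+
+ model = Wav2Vec2ForCTC.from_pretrained(
+     "facebook/wav2vec2-xls-r-1b",
+     attention_dropout=0.094,
+     hidden_dropout=0.047,
+     feat_proj_dropout=0.04,
+     mask_time_prob=0.082,
+     layerdrop=0.041,
+     activation_dropout=0.055,
+     ctc_loss_reduction="mean",
+     pad_token_id=tokenizer.pad_token_id,  # assumed: pad id of your CTC tokenizer
+     vocab_size=len(tokenizer),            # assumed: size of your fine-tuning vocab
+ )
+ ```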
+
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Wer |
@@ -117,5 +149,30 @@ The following hyperparameters were used during training:
 
 - Transformers 4.17.0.dev0
 - Pytorch 1.10.2+cu102
- - Datasets 1.18.2.dev0
+ - Datasets 1.18.3
 - Tokenizers 0.11.0
+
+ ## Evaluation results
+
+ Evaluation was done with the [Common Voice 7.0 Finnish test split](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
+
+ To evaluate this model, run the `eval.py` script in this repository:
+
+ ```bash
+ python3 eval.py --model_id aapot/wav2vec2-xlsr-1b-finnish-lm --dataset mozilla-foundation/common_voice_7_0 --config fi --split test
+ ```
+
+ This model (the second data row of the table) achieves the following WER (Word Error Rate) and CER (Character Error Rate) results compared to our other models:
+
+ | Model | WER (with LM) | WER (without LM) | CER (with LM) | CER (without LM) |
+ |-----------------------------------------|---------------|------------------|---------------|------------------|
+ | aapot/wav2vec2-xlsr-1b-finnish-lm-v2 | **4.09** | **9.73** | **0.88** | **1.65** |
+ | aapot/wav2vec2-xlsr-1b-finnish-lm | 5.65 | 13.11 | 1.20 | 2.23 |
+ | aapot/wav2vec2-xlsr-300m-finnish-lm | 8.16 | 17.92 | 1.97 | 3.36 |
+
+ ## Team Members
+
+ - Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
+ - Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
+
+ Feel free to contact us for more details 🤗
run-finnish-asr-models.ipynb ADDED
@@ -0,0 +1 @@
 
+ {"cells":[{"cell_type":"markdown","metadata":{},"source":["# Run Finnish ASR models\n","Below you can see example code using Huggingface's `transformers` and `datasets` libraries to run our Finnish ASR models released at Huggingface model hub.\n","\n","On Common Voice 7.0 Finnish test dataset, our best model is [Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2) which is quite large model having 1B parameters. We also have smaller 300M parameter version which is not as good on the Common Voice test but still quite usable: [Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm](https://huggingface.co/Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm)\n","\n","Because those models are rather large, running the tests using GPU is highly recommended so you should enable the free GPU accelerator in Kaggle or Colab if you are running this notebook on those services. It's also possible to run the model testing with CPU but it will be a lot slower with large test datasets."]},{"cell_type":"markdown","metadata":{},"source":["# 1. Install libraries"]},{"cell_type":"code","execution_count":null,"metadata":{"_cell_guid":"b1076dfc-b9ad-4769-8c92-a6c4dae69d19","_uuid":"8f2839f25d086af736a60e9eeb907d3b93b6e0e5","execution":{"iopub.execute_input":"2022-02-12T15:15:54.843567Z","iopub.status.busy":"2022-02-12T15:15:54.842929Z","iopub.status.idle":"2022-02-12T15:18:01.307337Z","shell.execute_reply":"2022-02-12T15:18:01.306491Z","shell.execute_reply.started":"2022-02-12T15:15:54.843469Z"},"trusted":true},"outputs":[],"source":["!pip install -U transformers[torch-speech]==4.16.2 datasets[audio]==1.18.3 huggingface_hub==0.4.0 librosa==0.9.0 torchaudio==0.10.2 jiwer==2.3.0 requests==2.27.1 https://github.com/kpu/kenlm/archive/master.zip"]},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-02-12T15:18:01.309757Z","iopub.status.busy":"2022-02-12T15:18:01.309361Z","iopub.status.idle":"2022-02-12T15:18:09.694185Z","shell.execute_reply":"2022-02-12T15:18:09.693462Z","shell.execute_reply.started":"2022-02-12T15:18:01.309722Z"},"trusted":true},"outputs":[],"source":["import os\n","import re\n","import requests\n","import torch\n","from transformers import AutoModelForCTC, AutoProcessor, AutoConfig, pipeline\n","from datasets import load_dataset, Audio, load_metric\n","from huggingface_hub import notebook_login"]},{"cell_type":"markdown","metadata":{},"source":["# 2. Create test dataset\n","We'll use Huggingface's `datasets` library to create test dataset which offers easy methods for resampling audio data etc.\n","Basically, you have two options to create the test dataset:\n","1. Use ready dataset available at Huggingface's dataset hub (like Mozilla's Common Voice 7.0)\n","2. 
Load your own custom dataset from local audio files\n","\n","Below you can see examples of both methods for creating the test dataset."]},{"cell_type":"markdown","metadata":{},"source":["## Option 1: Use ready dataset from Huggingface dataset hub\n","Let's load Mozilla's Common Voice 7.0 from hub: https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0\n","\n","Note: loading Common Voice 7.0 requires that you have a Huggingface user account (it's free) and that you have clicked \"Access repository\" on the dataset hub page: https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0\n","\n","After clicked \"Access repository\" you need to also do the Huggingface hub notebook login and paste your Huggingface access token available in your Huggingace account settings: https://huggingface.co/settings/token\n","\n","This is not neccessary for the most datasets available at Huggingface hub but for Common Voice 7.0 it is"]},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-02-12T15:27:49.158403Z","iopub.status.busy":"2022-02-12T15:27:49.158139Z","iopub.status.idle":"2022-02-12T15:27:49.24121Z","shell.execute_reply":"2022-02-12T15:27:49.240526Z","shell.execute_reply.started":"2022-02-12T15:27:49.158373Z"},"trusted":true},"outputs":[],"source":["# do huggingface hub notebook login to be able to access the Common Voice 7.0 dataset\n","notebook_login()"]},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-02-12T15:28:10.252092Z","iopub.status.busy":"2022-02-12T15:28:10.251254Z","iopub.status.idle":"2022-02-12T15:28:37.518904Z","shell.execute_reply":"2022-02-12T15:28:37.51814Z","shell.execute_reply.started":"2022-02-12T15:28:10.252049Z"},"trusted":true},"outputs":[],"source":["# load Common Voice 7.0 dataset from Huggingface with Finnish \"test\" split\n","test_dataset = load_dataset(\"mozilla-foundation/common_voice_7_0\", \"fi\", split=\"test\", use_auth_token=True)"]},{"cell_type":"markdown","metadata":{},"source":["## Option 2: Load custom dataset from local audio files\n","We can also load our own custom dataset from local audio files with `datasets` library. Basically you need for example an Excel/CSV/Text file having two columns: one for the transcription texts and one for the audio filepaths. 
You can read more about loading local data from datasets' documentation: https://huggingface.co/docs/datasets/loading.html#local-and-remote-files"]},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-02-12T15:18:09.695994Z","iopub.status.busy":"2022-02-12T15:18:09.695383Z","iopub.status.idle":"2022-02-12T15:22:03.838648Z","shell.execute_reply":"2022-02-12T15:22:03.837895Z","shell.execute_reply.started":"2022-02-12T15:18:09.695954Z"},"trusted":true},"outputs":[],"source":["# Let's download a small Finnish parliament session 2 dataset (147 audio samples) to demonstrate ASR dataset creation with custom audio files\n","# It's available here https://b2share.eudat.eu/records/4df422d631544ce682d6af1d4714b2d4\n","\n","parliament_dataset_download_path = \"./parliament_session_2\"\n","\n","os.mkdir(parliament_dataset_download_path)\n","\n","parliament_files = [\"%.2d\" % i for i in range(1, 148)]\n","\n","for file in parliament_files:\n"," url = f\"https://b2share.eudat.eu/api/files/027d2358-f28d-4f73-8a51-c174989388f9/session_2_SEG_{file}.wav\"\n"," response = requests.get(url)\n"," file_name = url.split('/')[-1]\n"," file = open(os.path.join(parliament_dataset_download_path, file_name), \"wb\")\n"," file.write(response.content)\n"," file.close()\n","\n","url = \"https://b2share.eudat.eu/api/files/027d2358-f28d-4f73-8a51-c174989388f9/session_2.trn.trn\"\n","response = requests.get(url)\n","file = open(os.path.join(parliament_dataset_download_path, \"transcript.csv\"), \"wb\")\n","file.write(response.content)\n","file.close()"]},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-02-12T15:22:03.840742Z","iopub.status.busy":"2022-02-12T15:22:03.840504Z","iopub.status.idle":"2022-02-12T15:22:04.326044Z","shell.execute_reply":"2022-02-12T15:22:04.325335Z","shell.execute_reply.started":"2022-02-12T15:22:03.840709Z"},"trusted":true},"outputs":[],"source":["# Let's load the local transcript CSV file so that it will have transcriptions in \"sentence\" column and audio file paths in \"audio\" column\n","test_dataset = load_dataset(\"csv\", data_files=[os.path.join(parliament_dataset_download_path, \"transcript.csv\")], delimiter=\"(\", column_names=[\"sentence\", \"audio\"], split=\"train\", encoding=\"latin-1\")"]},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-02-12T15:22:04.327596Z","iopub.status.busy":"2022-02-12T15:22:04.327173Z","iopub.status.idle":"2022-02-12T15:22:04.387944Z","shell.execute_reply":"2022-02-12T15:22:04.387282Z","shell.execute_reply.started":"2022-02-12T15:22:04.327556Z"},"trusted":true},"outputs":[],"source":["# We need to fix the audio filepaths so that they match with the local directory paths because they are a bit different than the original paths\n","def fix_parliament_audio_paths(batch):\n"," batch[\"audio\"] = os.path.join(parliament_dataset_download_path, batch[\"audio\"].split(\")\")[0]+\".wav\")\n"," batch[\"sentence\"] = batch[\"sentence\"].strip()\n"," return batch\n","\n","test_dataset = test_dataset.map(fix_parliament_audio_paths)"]},{"cell_type":"markdown","metadata":{},"source":["## Process audio files into numerical arrays inside the dataset\n","Note: this is needed for the dataset loaded from own local files. 
For Common Voice 7.0 loaded from the Huggingface model hub, this has already been done automatically for you"]},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-02-12T15:30:51.825342Z","iopub.status.busy":"2022-02-12T15:30:51.824758Z","iopub.status.idle":"2022-02-12T15:30:52.117907Z","shell.execute_reply":"2022-02-12T15:30:52.117042Z","shell.execute_reply.started":"2022-02-12T15:30:51.825302Z"},"trusted":true},"outputs":[],"source":["# Let's check one example of the test_dataset\n","# You should see \"sentence\" key having the transcription text and \"audio\" key having the path to the audio file\n","test_dataset[0]"]},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-02-12T15:31:12.966402Z","iopub.status.busy":"2022-02-12T15:31:12.965876Z","iopub.status.idle":"2022-02-12T15:31:12.978413Z","shell.execute_reply":"2022-02-12T15:31:12.977716Z","shell.execute_reply.started":"2022-02-12T15:31:12.966366Z"},"trusted":true},"outputs":[],"source":["# Let's decode audio files into arrays inside the dataset\n","# Documentation about audio processing: https://huggingface.co/docs/datasets/audio_process.html\n","test_dataset = test_dataset.cast_column(\"audio\", Audio())"]},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-02-12T15:31:15.336629Z","iopub.status.busy":"2022-02-12T15:31:15.336061Z","iopub.status.idle":"2022-02-12T15:31:15.357479Z","shell.execute_reply":"2022-02-12T15:31:15.356736Z","shell.execute_reply.started":"2022-02-12T15:31:15.33659Z"},"trusted":true},"outputs":[],"source":["# Let's check one example of the test_dataset\n","# You should see \"array\" and \"sampling_rate\" keys inside the \"audio\" dict\n","test_dataset[0]"]},{"cell_type":"markdown","metadata":{},"source":["# 3. Load Finnish ASR model for testing\n","We'll use Huggingface's `transformers` library to easily load and use models available at Huggingface's model hub\n"]},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-02-12T15:22:05.334043Z","iopub.status.busy":"2022-02-12T15:22:05.33375Z","iopub.status.idle":"2022-02-12T15:22:05.339128Z","shell.execute_reply":"2022-02-12T15:22:05.338305Z","shell.execute_reply.started":"2022-02-12T15:22:05.334004Z"},"trusted":true},"outputs":[],"source":["# Hugginface model hub's model ID\n","# e.g. \"Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2\" for the best 1B parameter model\n","# e.g. 
\"Finnish-NLP/wav2vec2-xlsr-300m-finnish-lm\" for the smaller 300M parameter model\n","asr_model_name = \"Finnish-NLP/wav2vec2-xlsr-1b-finnish-lm-v2\""]},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-02-12T15:22:05.343404Z","iopub.status.busy":"2022-02-12T15:22:05.342747Z","iopub.status.idle":"2022-02-12T15:22:15.977782Z","shell.execute_reply":"2022-02-12T15:22:15.977025Z","shell.execute_reply.started":"2022-02-12T15:22:05.343365Z"},"trusted":true},"outputs":[],"source":["# load model's processor\n","processor = AutoProcessor.from_pretrained(asr_model_name)"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["# OPTIONAL: change decoder's default alpha and beta parameters for language model decoding\n","# Check this video for learning more about those parameters: https://youtu.be/mp7fHMTnK9A?t=1418\n","# TLDR: alpha is the weight of the LM so lower the alpha for LM to have less effect and higher the alpha to increase its effect\n","processor.decoder.reset_params(\n"," alpha=0.5, # 0.5 by default\n"," beta=1.5, # 1.5 by default\n",")"]},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-02-12T15:22:15.979402Z","iopub.status.busy":"2022-02-12T15:22:15.979151Z","iopub.status.idle":"2022-02-12T15:23:58.143001Z","shell.execute_reply":"2022-02-12T15:23:58.14223Z","shell.execute_reply.started":"2022-02-12T15:22:15.979367Z"},"trusted":true},"outputs":[],"source":["# load model and its config\n","model = AutoModelForCTC.from_pretrained(asr_model_name)\n","config = AutoConfig.from_pretrained(asr_model_name)"]},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-02-12T15:23:58.144695Z","iopub.status.busy":"2022-02-12T15:23:58.144215Z","iopub.status.idle":"2022-02-12T15:24:06.408931Z","shell.execute_reply":"2022-02-12T15:24:06.408187Z","shell.execute_reply.started":"2022-02-12T15:23:58.144661Z"},"trusted":true},"outputs":[],"source":["# Let's use Huggingface's easy-to-use ASR pipeline loaded with our model to transcribe our audio data\n","# To use GPU in the ASR pipeline, \"device\" needs to be 0, for CPU it should be -1\n","device = 0 if torch.cuda.is_available() else -1\n","asr = pipeline(\"automatic-speech-recognition\", model=model, config=config, tokenizer=processor.tokenizer, feature_extractor=processor.feature_extractor, decoder=processor.decoder, device=device)"]},{"cell_type":"markdown","metadata":{},"source":["# 4. Resample test dataset to the correct sampling rate required by the model\n","Our models are trained with audio data sampled at 16000 kHz so you need to use them with audio sampled at the same 16000 kHz. Luckily, Huggingface's `datasets` library offers easy ready method for resampling our testing dataset into correct sampling rate."]},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-02-12T15:33:26.348746Z","iopub.status.busy":"2022-02-12T15:33:26.348448Z","iopub.status.idle":"2022-02-12T15:33:26.358524Z","shell.execute_reply":"2022-02-12T15:33:26.357815Z","shell.execute_reply.started":"2022-02-12T15:33:26.348717Z"},"trusted":true},"outputs":[],"source":["# Get the model's sampling rate (16000 with our models)\n","sampling_rate = processor.feature_extractor.sampling_rate\n","\n","# Resample our test dataset\n","test_dataset = test_dataset.cast_column(\"audio\", Audio(sampling_rate=sampling_rate))"]},{"cell_type":"markdown","metadata":{},"source":["# 5. 
Run test dataset through the model's ASR pipeline to get predicted transcriptions"]},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-02-12T15:24:06.42029Z","iopub.status.busy":"2022-02-12T15:24:06.419997Z","iopub.status.idle":"2022-02-12T15:24:06.432339Z","shell.execute_reply":"2022-02-12T15:24:06.431597Z","shell.execute_reply.started":"2022-02-12T15:24:06.420255Z"},"trusted":true},"outputs":[],"source":["# Test dataset's true target transcriptions can e.g. include special characters not relevant for ASR testing,\n","# so let's create target transcription text normalization function\n","def normalize_text(text: str) -> str:\n"," \"\"\"DO ADAPT FOR YOUR USE CASE. this function normalizes the target text transcription.\"\"\"\n","\n"," CHARS_TO_IGNORE = [\",\", \"?\", \"¿\", \".\", \"!\", \"¡\", \";\", \";\", \":\", '\"\"', \"%\", '\"', \"�\", \"ʿ\", \"·\", \"჻\", \"~\", \"՞\",\n"," \"؟\", \"،\", \"।\", \"॥\", \"«\", \"»\", \"„\", \"“\", \"”\", \"「\", \"」\", \"‘\", \"’\", \"《\", \"》\", \"(\", \")\", \"[\", \"]\",\n"," \"{\", \"}\", \"=\", \"`\", \"_\", \"+\", \"<\", \">\", \"…\", \"–\", \"°\", \"´\", \"ʾ\", \"‹\", \"›\", \"©\", \"®\", \"—\", \"→\", \"。\",\n"," \"、\", \"﹂\", \"﹁\", \"‧\", \"~\", \"﹏\", \",\", \"{\", \"}\", \"(\", \")\", \"[\", \"]\", \"【\", \"】\", \"‥\", \"〽\",\n"," \"『\", \"』\", \"〝\", \"〟\", \"⟨\", \"⟩\", \"〜\", \":\", \"!\", \"?\", \"♪\", \"؛\", \"/\", \"\\\\\", \"º\", \"−\", \"^\", \"ʻ\", \"ˆ\"]\n"," \n"," chars_to_remove_regex = f\"[{re.escape(''.join(CHARS_TO_IGNORE))}]\"\n","\n"," text = re.sub(chars_to_remove_regex, \"\", text.lower())\n"," text = re.sub(\"[-]\", \" \", text)\n","\n"," # In addition, we can normalize the target text, e.g. removing new lines characters etc...\n"," # note that order is important here!\n"," token_sequences_to_ignore = [\"\\n\\n\", \"\\n\", \" \", \" \"]\n","\n"," for t in token_sequences_to_ignore:\n"," text = \" \".join(text.split(t))\n","\n"," return text"]},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-02-12T15:24:06.434179Z","iopub.status.busy":"2022-02-12T15:24:06.433435Z","iopub.status.idle":"2022-02-12T15:24:06.440821Z","shell.execute_reply":"2022-02-12T15:24:06.440014Z","shell.execute_reply.started":"2022-02-12T15:24:06.434143Z"},"trusted":true},"outputs":[],"source":["# function used to get predicted transcriptions by the model and also do the target transcription normalization at the same time\n","def map_to_pred(batch):\n"," prediction = asr(batch[\"audio\"][\"array\"])\n"," # for very long audios (e.g. over 30 min) you may have to add audio chunking to avoid memory errors, read more here: https://huggingface.co/blog/asr-chunking\n"," # for example: prediction = asr(batch[\"audio\"][\"array\"], chunk_length_s=6, stride_length_s=(2, 2))\n","\n"," batch[\"prediction\"] = prediction[\"text\"]\n"," batch[\"target\"] = normalize_text(batch[\"sentence\"]) # normalize target text (e.g. 
make it lower case and remove punctuation)\n"," return batch"]},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-02-12T15:33:34.74169Z","iopub.status.busy":"2022-02-12T15:33:34.741424Z","iopub.status.idle":"2022-02-12T15:37:09.71817Z","shell.execute_reply":"2022-02-12T15:37:09.717482Z","shell.execute_reply.started":"2022-02-12T15:33:34.741661Z"},"trusted":true},"outputs":[],"source":["# Let's run our test dataset with the previosly defined function to get the results\n","# This can take some time with large test datasets or if you run with CPU\n","result = test_dataset.map(map_to_pred, remove_columns=test_dataset.column_names)"]},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-02-12T15:37:17.693528Z","iopub.status.busy":"2022-02-12T15:37:17.692981Z","iopub.status.idle":"2022-02-12T15:37:17.698636Z","shell.execute_reply":"2022-02-12T15:37:17.697998Z","shell.execute_reply.started":"2022-02-12T15:37:17.69349Z"},"trusted":true},"outputs":[],"source":["# Let's check one example of the results\n","# You should see \"prediction\" key having the model's transcription prediction and \"target\" key having the original target transcription\n","result[0]"]},{"cell_type":"markdown","metadata":{},"source":["# 6. Compute WER and CER metrics for the results\n","Let's use Huggingface's `datasets` library's standard WER (Word Error Rate) and CER (Character Error Rate) metric methods"]},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-02-12T15:37:23.308834Z","iopub.status.busy":"2022-02-12T15:37:23.30826Z","iopub.status.idle":"2022-02-12T15:37:24.637832Z","shell.execute_reply":"2022-02-12T15:37:24.637113Z","shell.execute_reply.started":"2022-02-12T15:37:23.308794Z"},"trusted":true},"outputs":[],"source":["# load ASR metrics from Huggingface's datasets library\n","wer = load_metric(\"wer\")\n","cer = load_metric(\"cer\")"]},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-02-12T15:37:25.893502Z","iopub.status.busy":"2022-02-12T15:37:25.892888Z","iopub.status.idle":"2022-02-12T15:37:26.0871Z","shell.execute_reply":"2022-02-12T15:37:26.086383Z","shell.execute_reply.started":"2022-02-12T15:37:25.893464Z"},"trusted":true},"outputs":[],"source":["# compute ASR metrics\n","wer_result = wer.compute(references=result[\"target\"], predictions=result[\"prediction\"])\n","cer_result = cer.compute(references=result[\"target\"], predictions=result[\"prediction\"])"]},{"cell_type":"code","execution_count":null,"metadata":{"execution":{"iopub.execute_input":"2022-02-12T15:37:27.263482Z","iopub.status.busy":"2022-02-12T15:37:27.26322Z","iopub.status.idle":"2022-02-12T15:37:27.270282Z","shell.execute_reply":"2022-02-12T15:37:27.269442Z","shell.execute_reply.started":"2022-02-12T15:37:27.263445Z"},"trusted":true},"outputs":[],"source":["# print metric results\n","result_str = f\"WER: {wer_result}\\n\" f\"CER: {cer_result}\"\n","print(result_str)"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":[]}],"metadata":{"kernelspec":{"display_name":"Python 3","language":"python","name":"python3"},"language_info":{"codemirror_mode":{"name":"ipython","version":3},"file_extension":".py","mimetype":"text/x-python","name":"python","nbconvert_exporter":"python","pygments_lexer":"ipython3","version":"3.7.12"}},"nbformat":4,"nbformat_minor":4}