## Training procedure
To reproduce these results, we strongly recommend that you follow the [instructions from HuggingFace](https://github.com/huggingface/transformers/tree/master/examples/research_projects/robust-speech-event#talks) to train a simple Swedish model.

When you have verified that you are able to do this, create a fresh repo. You can then start by copying the files `run.sh` and `run_speech_recognition_ctc.py` from our repo. Running these will create all the other necessary files and should let you reproduce our results. With some tweaks to the hyperparameters, you might even be able to build an even better ASR model. Good luck!
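
For orientation, here is a minimal sketch of those steps, assuming a fresh repo called `my-asr-repo` (the name is just a placeholder):

```bash
# Fetch the two entry points from this repo (sketch; paths are placeholders).
git clone https://huggingface.co/NbAiLab/XLSR-300M-bokmaal
cp XLSR-300M-bokmaal/run.sh XLSR-300M-bokmaal/run_speech_recognition_ctc.py my-asr-repo/

# Launch training; run.sh invokes run_speech_recognition_ctc.py with the
# parameters listed below and creates the remaining files as it runs.
cd my-asr-repo
bash run.sh
```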

### Language Model
As you can see from the results above, adding even a simple 5-gram language model will significantly improve the results. 🤗 has provided another [very nice blog post](https://huggingface.co/blog/wav2vec2-with-ngram) about how to add a 5-gram language model to improve the ASR model. You can build this from your own corpus, for instance by extracting some suitable text from the [Norwegian Colossal Corpus](https://huggingface.co/datasets/NbAiLab/NCC). You can also skip some of the steps in the guide, and copy the [5-gram model from this repo](https://huggingface.co/NbAiLab/XLSR-300M-bokmaal/tree/main/language_model).
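
If you prefer to build the 5-gram yourself, a minimal sketch, assuming KenLM is compiled and your extracted text sits in `corpus.txt` (a placeholder name):

```bash
# Train a 5-gram ARPA language model on your own text corpus with KenLM;
# the blog post above shows how to wrap the result into the processor.
kenlm/build/bin/lmplz -o 5 < corpus.txt > 5gram.arpa
```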

### Parameters
The following parameters were used during training:
```
--dataset_name="NbAiLab/NPSC"
...
--preprocessing_num_workers="16"
```
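
For orientation, a sketch of how `run.sh` might pass these flags to the training script (only the two flags shown above are included; the full list goes in between):

```bash
# Sketch: run.sh hands the flag list straight to the CTC fine-tuning script.
python run_speech_recognition_ctc.py \
    --dataset_name="NbAiLab/NPSC" \
    --preprocessing_num_workers="16"
```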

With these settings, the training might take 3-4 days on an average GPU. You should, however, get a decent model and faster results by tweaking these parameters:

| Parameter | Comment |
|:----------|:--------|