Update README.md
README.md (CHANGED)

```diff
@@ -58,6 +58,7 @@ The NeMo toolkit [4] was used for training the models for around 100 epochs. The
 
 The tokenizers for these models were built using the semantics annotations of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py). We use a vocabulary size of 58, including the BOS, EOS and padding tokens.
 
+Details on how to train the model can be found [here](https://github.com/NVIDIA/NeMo/blob/main/examples/slu/speech_intent_slot/README.md).
 
 ### Datasets
 
```
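For context, the tokenizer-building step referenced in the diff would typically be invoked along these lines. This is a hedged sketch, not the commit's actual command: the input/output paths are placeholders, and the exact flags should be checked against the linked `process_asr_text_tokenizer.py` script in the NeMo repository.

```
# Sketch of building a 58-token tokenizer from the train-set annotations.
# <train_manifest.json> and <tokenizer_out_dir> are hypothetical placeholders.
python process_asr_text_tokenizer.py \
    --manifest=<train_manifest.json> \
    --data_root=<tokenizer_out_dir> \
    --vocab_size=58 \
    --tokenizer=spe
```

The vocabulary size of 58 matches the README text and includes the BOS, EOS and padding tokens.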