Resource and Documentation Guide
--------------------------------

|
Hands-on speaker recognition tutorial notebooks can be found in
`the speaker recognition tutorials folder <https://github.com/NVIDIA/NeMo/tree/stable/tutorials/speaker_tasks/>`_.
These and most other tutorials can be run on Google Colab by specifying the link to each notebook's GitHub page in Colab.
If you are looking for information about a particular SpeakerNet model, or would like to find out more about the model
architectures available in the ``nemo_asr`` collection, check out the :doc:`Models <./models>` page.
|
Documentation on dataset preprocessing can be found on the :doc:`Datasets <./datasets>` page.
NeMo includes preprocessing and other scripts for speaker recognition in the ``<nemo/scripts/speaker_tasks/>`` folder, and that page contains instructions on running
those scripts. It also includes guidance for creating your own NeMo-compatible dataset, if you have your own data.
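As a concrete illustration of a NeMo-compatible dataset, here is a minimal sketch of writing a speaker-recognition manifest: one JSON object per line, with the conventional ``audio_filepath``, ``duration``, and ``label`` keys. All file paths, durations, and speaker labels below are illustrative, not real data.

```python
import json

# Each manifest line is a standalone JSON object describing one utterance.
# The paths, durations, and speaker labels here are placeholders.
utterances = [
    {"audio_filepath": "data/speaker_0/utt_0.wav", "duration": 3.2, "label": "speaker_0"},
    {"audio_filepath": "data/speaker_1/utt_0.wav", "duration": 2.7, "label": "speaker_1"},
]

# Write one JSON object per line (JSON Lines), as NeMo manifests expect.
with open("train_manifest.json", "w") as f:
    for entry in utterances:
        f.write(json.dumps(entry) + "\n")
```

The same manifest layout is used for training, validation, and test splits; only the file lists differ.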
|
Information about how to load model checkpoints (either local files or pretrained ones from NGC) and perform inference, as well as a list
of the checkpoints available on NGC, can be found on the :doc:`Checkpoints <./results>` page.
|
Documentation for configuration files specific to the ``nemo_asr`` models can be found on the
:doc:`Configuration Files <./configs>` page.
|
For a clear step-by-step walkthrough, we recommend the notebooks in the `speaker recognition tutorials folder <https://github.com/NVIDIA/NeMo/tree/stable/tutorials/speaker_tasks/>`_.