---
language:
- zgh
- kab
- shi
- rif
- tzm
license: cc-by-4.0
library_name: nemo
datasets:
- mozilla-foundation/common_voice_17_0
thumbnail: null
tags:
- automatic-speech-recognition
- speech
- audio
- TDT
- FastConformer
- Transducer
- NeMo
- pytorch
model-index:
- name: stt_zgh_fastconformer_transducer_small
  results:
  - task:
      type: Automatic Speech Recognition
      name: automatic-speech-recognition
    dataset:
      name: Mozilla Common Voice 17.0
      type: mozilla-foundation/common_voice_17_0
      config: zgh
      split: test
      args:
        language: zgh
    metrics:
    - name: Test WER
      type: wer
      value: 72.44
  - task:
      type: Automatic Speech Recognition
      name: automatic-speech-recognition
    dataset:
      name: Mozilla Common Voice 17.0
      type: mozilla-foundation/common_voice_17_0
      config: zgh
      split: test
      args:
        language: zgh
    metrics:
    - name: Test CER
      type: cer
      value: 26.56
  - task:
      type: Automatic Speech Recognition
      name: automatic-speech-recognition
    dataset:
      name: Mozilla Common Voice 17.0
      type: mozilla-foundation/common_voice_17_0
      config: kab
      split: test
      args:
        language: kab
    metrics:
    - name: Test WER
      type: wer
      value: 39.78
  - task:
      type: Automatic Speech Recognition
      name: automatic-speech-recognition
    dataset:
      name: Mozilla Common Voice 17.0
      type: mozilla-foundation/common_voice_17_0
      config: kab
      split: test
      args:
        language: kab
    metrics:
    - name: Test CER
      type: cer
      value: 15.81
metrics:
- wer
- cer
pipeline_tag: automatic-speech-recognition
---

## Model Overview

This model transcribes speech in Tamazight languages (Standard Moroccan Tamazight, Kabyle, Tachelhit, Tarifit, and Central Atlas Tamazight). It is a small FastConformer model with a Transducer decoder.

## NVIDIA NeMo: Training

To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version.

```shell
pip install nemo_toolkit['asr']
```

## How to Use this Model

The model is available for use in the NeMo toolkit [1], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset (a minimal fine-tuning sketch is given in the appendix at the end of this card).

### Automatically instantiate the model

```python
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.ASRModel.from_pretrained("ayymen/stt_zgh_fastconformer_transducer_small")
```

### Transcribing using Python

First, let's get a sample:

```shell
wget https://dldata-public.s3.us-east-2.amazonaws.com/2086-149220-0033.wav
```

Then simply do:

```python
asr_model.transcribe(['2086-149220-0033.wav'])
```

### Transcribing many audio files

```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
  pretrained_name="ayymen/stt_zgh_fastconformer_transducer_small" \
  audio_dir=""
```

### Input

This model accepts 16000 Hz mono-channel audio (wav files) as input. A conversion sketch for audio in other formats is given in the appendix at the end of this card.

### Output

This model provides transcribed speech as a string for a given audio sample.

## Model Architecture

FastConformer is an optimized version of the Conformer model with 8x depthwise-separable convolutional downsampling; this checkpoint is the small variant with a Transducer decoder. You can find more information about FastConformer in the NeMo documentation: [Fast Conformer](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#fast-conformer).

## Training

The model was trained for 48 epochs on an NVIDIA GeForce RTX 4050 Laptop GPU.

### Datasets

The model was trained on the *kab* and *zgh* subsets of Mozilla Common Voice 17.0, along with Bible readings in Tachelhit and Tarifit.

## Performance

Metrics are computed on the cleaned, non-punctuated test sets of *zgh* and *kab* (converted to Tifinagh):

| Language | Test WER (%) | Test CER (%) |
|----------|--------------|--------------|
| zgh      | 72.44        | 26.56        |
| kab      | 39.78        | 15.81        |

## Limitations

Since this model was trained on publicly available speech datasets, its performance might degrade on speech that includes technical terms, or vernacular that the model has not been trained on. The model might also perform worse on accented speech.

## References

[1] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
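
## Appendix: Preparing audio input

As noted in the Input section, the model expects 16000 Hz mono-channel WAV files. The sketch below shows one way to convert other audio to that format. It assumes `torchaudio` is available in your environment and uses a hypothetical input file `my_recording.wav`; adapt the paths to your own data.

```python
import torchaudio

# Load the recording (possibly multi-channel, possibly not 16 kHz).
waveform, sample_rate = torchaudio.load("my_recording.wav")  # hypothetical file

# Downmix to mono by averaging channels, if necessary.
if waveform.shape[0] > 1:
    waveform = waveform.mean(dim=0, keepdim=True)

# Resample to the 16 kHz rate the model expects, if necessary.
if sample_rate != 16000:
    resampler = torchaudio.transforms.Resample(orig_freq=sample_rate, new_freq=16000)
    waveform = resampler(waveform)

torchaudio.save("my_recording_16k.wav", waveform, 16000)
```

The converted file can then be transcribed as shown above, e.g. `asr_model.transcribe(['my_recording_16k.wav'])`.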
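
## Appendix: Fine-tuning sketch

The checkpoint can serve as a starting point for fine-tuning on another dataset. The outline below is a minimal, untested sketch using NeMo's standard data-setup hooks; the manifest paths are hypothetical, and the exact Lightning import depends on your NeMo version.

```python
from omegaconf import OmegaConf
import nemo.collections.asr as nemo_asr

try:
    import lightning.pytorch as pl  # recent NeMo releases
except ImportError:
    import pytorch_lightning as pl  # older NeMo releases

asr_model = nemo_asr.models.ASRModel.from_pretrained(
    "ayymen/stt_zgh_fastconformer_transducer_small"
)

# NeMo manifests are JSON-lines files with "audio_filepath",
# "duration" and "text" keys; these paths are hypothetical.
train_cfg = OmegaConf.create({
    "manifest_filepath": "train_manifest.json",
    "sample_rate": 16000,
    "batch_size": 8,
    "shuffle": True,
})
val_cfg = OmegaConf.create({
    "manifest_filepath": "val_manifest.json",
    "sample_rate": 16000,
    "batch_size": 8,
    "shuffle": False,
})
asr_model.setup_training_data(train_cfg)
asr_model.setup_validation_data(val_cfg)

trainer = pl.Trainer(max_epochs=5, accelerator="auto", devices=1)
trainer.fit(asr_model)
```

For real runs you would typically drive this through the training scripts under `examples/asr` in the NeMo repository instead, which expose the full training configuration.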