---
language:
  - en
license: cc-by-nc-nd-4.0
library_name: nemo
datasets:
  - commonvoice
thumbnail: null
tags:
  - automatic-speech-recognition
  - speech
  - audio
  - CTC
  - named-entity-recognition
  - emotion-classification
  - Transformer
  - NeMo
  - pytorch
model-index:
  - name: 1step_ctc_ner_emotion_commonvoice500hrs
    results: []
---

This speech tagger transcribes English speech, annotates named entities, and predicts speaker emotion in a single pass.

The model is suitable for voice-AI applications, both real-time and offline.

Model Details

  • Model type: NeMo ASR
  • Architecture: Conformer CTC
  • Language: English
  • Training data: CommonVoice, Gigaspeech
  • Performance metrics: [Metrics]

Usage

To use this model, install the NeMo toolkit with its ASR dependencies:

pip install -U "nemo_toolkit[asr]"

How to run

import nemo.collections.asr as nemo_asr

# Step 1: Load the ASR model from Hugging Face
model_name = 'WhissleAI/speech-tagger_en_ner_emotion'
asr_model = nemo_asr.models.EncDecCTCModel.from_pretrained(model_name)

# Step 2: Provide the path to your audio file
audio_file_path = '/path/to/your/audio_file.wav'

# Step 3: Transcribe the audio
# (Passing the file list positionally works across NeMo versions; older
# releases also accepted the keyword paths2audio_files, which newer ones removed.)
transcription = asr_model.transcribe([audio_file_path])
print(f'Transcription: {transcription[0]}')
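Because the tagger emits entities and emotion inline with the transcript, downstream code typically needs to split the tags back out. The sketch below is a minimal, hypothetical post-processing step: the exact tag format (`ENTITY_<TYPE> ... END` spans and a trailing `EMOTION_<LABEL>` token) is an assumption, not documented in this card, so adapt the patterns to the output your copy of the model actually produces.

```python
import re

# Hypothetical tagged output; the tag scheme here is an assumed example,
# not the documented format of this model.
sample = "i met ENTITY_PERSON_NAME john smith END in ENTITY_CITY paris END EMOTION_HAPPY"

def parse_tagged(text):
    """Split a tagged transcription into plain text, entities, and emotion."""
    # Collect (entity_type, entity_text) pairs from ENTITY_<TYPE> ... END spans.
    entities = [(m.group(1).lower(), m.group(2).strip())
                for m in re.finditer(r"ENTITY_(\w+?) (.+?) END", text)]
    # The emotion label is assumed to be a single trailing EMOTION_<LABEL> token.
    emotion_match = re.search(r"EMOTION_(\w+)\s*$", text)
    emotion = emotion_match.group(1).lower() if emotion_match else None
    # Strip all tag tokens to recover the plain transcript.
    plain = re.sub(r"ENTITY_\w+ | END|\s*EMOTION_\w+\s*$", "", text).strip()
    return plain, entities, emotion

plain, entities, emotion = parse_tagged(sample)
print(plain)     # i met john smith in paris
print(entities)  # [('person_name', 'john smith'), ('city', 'paris')]
print(emotion)   # happy
```

Feed `transcription[0]` from the step above into `parse_tagged` once the patterns match your model's real tag inventory.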