SpeechLLM
[The model is still training; we will release the latest checkpoints soon.]
SpeechLLM is a multi-modal LLM trained to predict the metadata of the speaker's turn in a conversation. The speechllm-2B model is based on the HubertX audio encoder and the TinyLlama LLM. Given an audio input, the model predicts the following:

1. SpeechActivity: whether the audio signal contains speech (True/False)
2. Transcript: the ASR transcript of the audio
3. Gender of the speaker
4. Emotion of the speaker
5. Age of the speaker
6. Accent of the speaker
```python
# Load the model directly from the Hugging Face Hub
from transformers import AutoModel

model = AutoModel.from_pretrained("skit-ai/speechllm-2B", trust_remote_code=True)

# Request all metadata fields for the audio in a single instruction
model.generate_meta(
    audio_path="path-to-audio.wav",
    instruction="Give me the following information about the audio [SpeechActivity, Transcript, Gender, Emotion, Age, Accent]",
    max_new_tokens=500,
    return_special_tokens=False
)
```
```python
# Model Generation
'''
{
  "SpeechActivity": "True",
  "Transcript": "Yes, I got it. I'll make the payment now.",
  "Gender": "Female",
  "Emotion": "Neutral",
  "Age": "Young",
  "Accent": "America"
}
'''
```
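Assuming `generate_meta` returns the JSON-formatted string shown above (the return type is not documented here), a minimal sketch for turning the output into a Python dict; `parse_meta` is a hypothetical helper, not part of this repo:

```python
import json
import re

def parse_meta(raw: str) -> dict:
    # Hypothetical helper: parse the generated metadata string into a dict.
    # Model output may carry a trailing comma before the closing brace,
    # which strict JSON parsing rejects, so strip it first.
    cleaned = re.sub(r",\s*}", "}", raw.strip())
    return json.loads(cleaned)

output = model.generate_meta(
    audio_path="path-to-audio.wav",
    instruction="Give me the following information about the audio [SpeechActivity, Transcript, Gender, Emotion, Age, Accent]",
    max_new_tokens=500,
    return_special_tokens=False,
)
meta = parse_meta(output)
print(meta["Transcript"])
```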
| Dataset | Word Error Rate (%) | Gender Acc. | Age Acc. | Accent Acc. |
|---|---|---|---|---|
| librispeech-test-clean | 7.36 | 0.9490 | | |
| librispeech-test-other | 10.47 | 0.9099 | | |
| CommonVoice test | 24.47 | 0.8680 | 0.6061 | 0.6156 |
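Word Error Rate counts substitutions, insertions, and deletions against a reference transcript, divided by the number of reference words. A minimal sketch of scoring a predicted transcript with the open-source jiwer library (an illustration only; this is not the evaluation script used to produce the table above):

```python
# Score a predicted transcript against a reference with jiwer.
from jiwer import wer

reference = "yes i got it i'll make the payment now"
hypothesis = "yes i got it i will make the payment now"

# 1 substitution + 1 insertion over 9 reference words -> 2/9, about 0.222
print(wer(reference, hypothesis))
```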