
Model Card for Ultravox

Ultravox is a multimodal Speech LLM built around pretrained Whisper and Llama 3 backbones. See https://ultravox.ai for the GitHub repo and more information.

Model Details

Model Description

Ultravox is a multimodal model that can consume both speech and text as input (e.g., a text system prompt and a voice user message). The input to the model is given as a text prompt containing a special <|audio|> pseudo-token; the model's processor replaces this token with embeddings derived from the input audio. Using the merged embeddings as input, the model then generates output text as usual.
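The merging step can be sketched as follows. This is a minimal illustration of the idea, not the actual Ultravox implementation: the sentinel id, function name, and shapes are all hypothetical, and the real processor works on batched tensors inside the model's forward pass.

```python
import numpy as np

AUDIO_TOKEN_ID = -1  # hypothetical sentinel id standing in for the <|audio|> pseudo-token

def merge_embeddings(token_ids, text_embeds, audio_embeds):
    """Splice audio-derived embeddings in place of the <|audio|> pseudo-token.

    token_ids    : list[int], one id per text token (sentinel marks <|audio|>)
    text_embeds  : (len(token_ids), d) array of text-token embeddings
    audio_embeds : (n_audio, d) array produced by the audio encoder/projector
    """
    rows = []
    for i, tok in enumerate(token_ids):
        if tok == AUDIO_TOKEN_ID:
            rows.append(audio_embeds)          # one pseudo-token expands into n_audio rows
        else:
            rows.append(text_embeds[i:i + 1])  # keep the text embedding unchanged
    return np.concatenate(rows, axis=0)

# Toy example: 3 text tokens with <|audio|> in the middle, replaced by
# 4 audio-frame embeddings, so the merged sequence has 2 + 4 = 6 rows.
ids = [10, AUDIO_TOKEN_ID, 11]
text = np.zeros((3, 8))
audio = np.ones((4, 8))
merged = merge_embeddings(ids, text, audio)
print(merged.shape)  # (6, 8)
```

The merged sequence is then consumed by the language model exactly as if it were ordinary token embeddings.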

In a future revision of Ultravox, we plan to expand the token vocabulary to support generation of semantic and acoustic audio tokens, which can then be fed to a vocoder to produce voice output. No preference tuning has been applied to this revision of the model.

  • Developed by: Fixie.ai
  • License: MIT

Model Sources

  • Repository: https://ultravox.ai

Uses

Voice agents, speech-to-speech translation, and analysis of spoken audio.

Training Details

Training Data

[More Information Needed]

Training Procedure

Preprocessing

[More Information Needed]

Training Hyperparameters

  • Training regime: [More Information Needed]

Speeds, Sizes, Times

[More Information Needed]


Evaluation

Testing Data, Factors & Metrics

Testing Data

[More Information Needed]

Factors

[More Information Needed]

Metrics

[More Information Needed]

Results

[More Information Needed]


Model size: 8.06B parameters