---
license: mit
language:
- en
base_model: Qwen/Qwen2-0.5B
---


<p align="center"><strong style="font-size: 18px;">
Mini-Omni: Language Models Can Hear, Talk While Thinking in Streaming
</strong>
</p>

<p align="center">
πŸ€— <a href="">Hugging Face</a>   | πŸ“– <a href="https://github.com/gpt-omni/mini-omni">Github</a> 
|     πŸ“‘ <a href="https://arxiv.org/abs/2408.16725">Technical report</a>
</p>

**This is a safetensors conversion of `gpt-omni/mini-omni`.**
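
As a quick sanity check of the converted checkpoint, the weights can be downloaded and inspected with `huggingface_hub` and `safetensors`. This is a minimal sketch, not official usage; the `repo_id` and `filename` below are placeholders, so substitute this repository's actual values:

```python
# Minimal sketch: download and inspect the converted safetensors weights.
# NOTE: repo_id and filename are placeholders (assumptions), not the
# confirmed values for this repository.
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

ckpt_path = hf_hub_download(
    repo_id="your-username/mini-omni-safetensors",  # placeholder repo id
    filename="model.safetensors",                   # placeholder filename
)
state_dict = load_file(ckpt_path)  # dict: tensor name -> torch.Tensor
print(f"Loaded {len(state_dict)} tensors")
```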

Mini-Omni is an open-source multimodal large language model that can **hear and talk while thinking**, featuring real-time end-to-end speech input and **streaming audio output** conversational capabilities.

<p align="center">
    <img src="frameworkv3.jpg" width="100%"/>
</p>


## Features

βœ… **Real-time speech-to-speech** conversational capabilities. No extra ASR or TTS models required.

βœ… **Talking while thinking**, with the ability to generate text and audio at the same time (a conceptual sketch follows this list).

βœ… **Streaming audio output** capabilities.

βœ… "Audio-to-Text" and "Audio-to-Audio" **batch inference** to further boost performance.
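
The "talking while thinking" behavior can be pictured as a single autoregressive loop that emits a text token and an audio token on every step, so audio starts streaming before the text response is complete. The sketch below is purely conceptual with a toy stand-in model; `ToyOmni`, `init_state`, and `step` are hypothetical names, not Mini-Omni's actual API (which lives in the GitHub repo):

```python
# Conceptual sketch of parallel text/audio streaming decode.
# ToyOmni is a toy stand-in; the real model decodes multiple audio-codec
# token layers per step with a Qwen2 backbone (see the technical report).
import random
from dataclasses import dataclass

@dataclass
class ToyOmni:
    eos_token: int = 0
    def init_state(self, audio_prompt):
        return list(audio_prompt)  # pretend speech encoding
    def step(self, state):
        text_tok = random.randint(1, 9)       # pretend text-head output
        audio_tok = random.randint(100, 199)  # pretend audio-head output
        return text_tok, audio_tok, state

def stream_talk_while_thinking(model, audio_prompt, max_steps=8):
    """Yield (text_token, audio_token) pairs; audio can play immediately."""
    state = model.init_state(audio_prompt)
    for _ in range(max_steps):
        text_tok, audio_tok, state = model.step(state)
        yield text_tok, audio_tok
        if text_tok == model.eos_token:
            break

for t, a in stream_talk_while_thinking(ToyOmni(), audio_prompt=[1, 2, 3]):
    print(f"text token={t}  audio token={a}")
```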

**NOTE**: Please refer to https://github.com/gpt-omni/mini-omni for more details.