gorkemgoknar committed
Commit 32524de
Parent(s):
6bf8c0d
add definitions
app.py
CHANGED
@@ -749,10 +749,11 @@ with gr.Blocks(title=title) as demo:
     gr.Markdown(
     """
     This Space demonstrates how to speak to a chatbot, based solely on open-source models.
-    It relies on 3 models:
-
-
-
+    It relies on 3 stage models:
+    Speech to Text : [Whisper-large-v2](https://sanchit-gandhi-whisper-large-v2.hf.space/) as an ASR model, to transcribe recorded audio to text. It is called through a [gradio client](https://www.gradio.app/docs/client).
+    LLM Model : [Mistral-7b-instruct](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) as the chat model, GGUF Q5_K_M quantized version used locally via llama_cpp[huggingface_hub](TheBloke/Mistral-7B-Instruct-v0.1-GGUF).
+    With LLM_MODEL="zephyr" it can use [Zephyr-7b-alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha) as the chat model. GGUF Q5_K_M quantized version used locally via llama_cpp from [huggingface.co/TheBloke](https://huggingface.co/TheBloke/zephyr-7B-alpha-GGUF).
+    Text to Speech : [Coqui's XTTS](https://huggingface.co/spaces/coqui/xtts) as a Multilingual TTS model, to generate the chatbot answers. This time, the model is hosted locally.

     Note:
     - By using this demo you agree to the terms of the Coqui Public Model License at https://coqui.ai/cpml
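
The added description outlines a three-stage pipeline: a remote Whisper Space called through `gradio_client` for ASR, a Q5_K_M GGUF chat model pulled from the Hub and run locally with `llama_cpp`, and Coqui XTTS for local TTS. The following is a minimal sketch of how those stages could be wired together; the Whisper Space's endpoint name and argument order, the exact GGUF filename, and the XTTS model id are assumptions, not taken from this commit.

```python
from gradio_client import Client
from huggingface_hub import hf_hub_download
from llama_cpp import Llama
from TTS.api import TTS

# 1) Speech to Text: call the hosted Whisper-large-v2 Space via gradio_client.
#    The "/predict" endpoint and its arguments are assumed here.
whisper_client = Client("https://sanchit-gandhi-whisper-large-v2.hf.space/")
transcript = whisper_client.predict(
    "recording.wav",   # path to the recorded audio
    "transcribe",      # ASR task
    api_name="/predict",
)

# 2) LLM: download the Q5_K_M GGUF from the Hub and run it locally with llama.cpp.
#    The filename below is an assumed name from TheBloke's repo.
gguf_path = hf_hub_download(
    repo_id="TheBloke/Mistral-7B-Instruct-v0.1-GGUF",
    filename="mistral-7b-instruct-v0.1.Q5_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=2048)
answer = llm(
    f"<s>[INST] {transcript} [/INST]",  # Mistral-Instruct prompt format
    max_tokens=256,
)["choices"][0]["text"]

# 3) Text to Speech: synthesize the answer locally with Coqui XTTS
#    (usage is subject to the Coqui Public Model License; model id assumed).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")
tts.tts_to_file(
    text=answer,
    speaker_wav="speaker_reference.wav",  # reference voice for cloning
    language="en",
    file_path="answer.wav",
)
```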