---
title: WhisperFusion
emoji: πŸŒ–
colorFrom: pink
colorTo: green
sdk: docker
python_version: '3.10'
sdk_version: latest
suggested_hardware: t4-small
suggested_storage: medium
app_file: examples/chatbot/html/main.py
app_port: 7860
base_path: /
fullWidth: false
models:
  - teknium/OpenHermes-2.5-Mistral-7B
datasets: []
tags:
  - AI
  - chatbot
  - speech-to-text
  - real-time
  - TensorRT
  - LLM
pinned: false
hf_oauth: false
hf_oauth_scopes: []
disable_embedding: false
startup_duration_timeout: 30m
custom_headers:
  cross-origin-embedder-policy: require-corp
  cross-origin-opener-policy: same-origin
  cross-origin-resource-policy: cross-origin
preload_from_hub:
  - >-
    NVIDIA/TensorRT-LLM
    examples/whisper/whisper_small_en,examples/phi/phi_engine,examples/phi/phi-2
description: >-
  WhisperFusion is an AI chatbot that provides ultra-low-latency conversations.
  It integrates Mistral, a large language model (LLM), on top of a real-time
  speech-to-text pipeline. It uses WhisperLive (built on OpenAI Whisper) to
  convert spoken language into text in real time, and both models are
  optimized to run as TensorRT engines for high performance and low latency.
installation: >-
  Install
  [TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM/blob/main/docs/source/installation.md)
  to build the Whisper and Mistral TensorRT engines. Refer to the README and the
  [Dockerfile.multi](https://github.com/NVIDIA/TensorRT-LLM/blob/main/docker/Dockerfile.multi)
  to install the required packages in the base PyTorch Docker image.
usage: Run the main.py script with the appropriate arguments to start the chatbot.
---
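Each `preload_from_hub` entry above follows the Spaces convention of a repository ID followed by an optional comma-separated list of paths to fetch at build time. A minimal sketch of how such an entry splits into its parts (the parser below is illustrative, not part of the Hugging Face API):

```python
def parse_preload_entry(entry: str):
    """Split a preload_from_hub entry into (repo_id, list_of_paths).

    Format assumed: "<repo_id> <path1>,<path2>,..." where the path list
    is optional. This is a sketch for illustration only.
    """
    parts = entry.split(maxsplit=1)  # repo id, then the rest (if any)
    repo_id = parts[0]
    paths = parts[1].split(",") if len(parts) > 1 else []
    return repo_id, paths


entry = ("NVIDIA/TensorRT-LLM "
         "examples/whisper/whisper_small_en,examples/phi/phi_engine,examples/phi/phi-2")
repo, paths = parse_preload_entry(entry)
print(repo)   # NVIDIA/TensorRT-LLM
print(paths)  # the three example paths listed in the metadata above
```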
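The `custom_headers` in the metadata enable cross-origin isolation (COEP + COOP), which browsers require for features such as `SharedArrayBuffer` that real-time audio pipelines often depend on. As a hedged sketch, a minimal local dev server applying the same three headers could look like this (this is not how the Space itself serves files, just an illustration of the headers in action):

```python
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer


class IsolatedHandler(SimpleHTTPRequestHandler):
    """Static-file handler that adds the cross-origin isolation headers
    listed in the Space metadata (illustrative dev server, hypothetical)."""

    def end_headers(self):
        # Same values as custom_headers in the metadata above.
        self.send_header("Cross-Origin-Embedder-Policy", "require-corp")
        self.send_header("Cross-Origin-Opener-Policy", "same-origin")
        self.send_header("Cross-Origin-Resource-Policy", "cross-origin")
        super().end_headers()


# To serve locally on the Space's port:
# ThreadingHTTPServer(("0.0.0.0", 7860), IsolatedHandler).serve_forever()
```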