---
base_model:
  - THUDM/glm-4-voice-9b
base_model_relation: quantized
---

# GLM-4-Voice-9B (INT4 Quantized)

中文 | English

## Model Overview

GLM-4-Voice is an end-to-end speech model developed by Zhipu AI. It can directly understand and generate speech in both Chinese and English, supporting real-time voice conversations, and it can change voice attributes such as emotion, tone, speech rate, and dialect according to user instructions. This repository provides the INT4 quantized version of GLM-4-Voice-9B, which significantly reduces the memory footprint: roughly 12 GB of GPU memory is enough to run the model smoothly. In testing, it ran well on an NVIDIA GeForce RTX 3060 with 12 GB of memory.
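
Before setting anything up, you can quickly confirm that your GPU has roughly 12 GB of memory. The check below uses `nvidia-smi`, which ships with the NVIDIA driver; it is a generic verification step, not part of this repository:

```bash
# Print the GPU name and total memory; you want roughly 12 GB or more
nvidia-smi --query-gpu=name,memory.total --format=csv
```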

## Usage Instructions

### Creating a Virtual Environment

First, ensure you are using Python 3.10, and create a virtual environment:

```bash
# Confirmed not compatible with Python 3.8/3.9/3.12 due to library compatibility issues
conda create -n GLM-4-Voice python=3.10
```

### Activate the Virtual Environment and Clone the Model

After activating the virtual environment, clone the model and code:

```bash
conda activate GLM-4-Voice
git clone https://huggingface.co/cydxg/glm-4-voice-9b-int4
```

Users in mainland China can clone from the mirror instead:

```bash
git clone https://hf-mirror.com/cydxg/glm-4-voice-9b-int4
```
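
The model weights in this repository are stored with Git LFS. If the clone completes but the weight files come back as small pointer files, make sure Git LFS is set up (a general Git LFS fix, assuming `git-lfs` is available from your package manager):

```bash
# Enable Git LFS for your user account, then fetch the real weight files
git lfs install
cd glm-4-voice-9b-int4 && git lfs pull && cd ..
```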

### Install Dependencies

Navigate to the model directory and install the required dependencies:

```bash
cd glm-4-voice-9b-int4
conda install -c conda-forge openfst
conda install -c conda-forge pynini==2.1.5
pip install -r requirements.txt
mkdir third_party
cd third_party
git clone https://github.com/shivammehta25/Matcha-TTS Matcha-TTS
cd ..  # return to the repository root before continuing
# Choose the appropriate version of torch based on your CUDA version
conda install pytorch==2.4.1 torchvision==0.19.1 torchaudio==2.4.1 pytorch-cuda=12.4 -c pytorch -c nvidia
```
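
Once the installation finishes, a quick sanity check confirms that PyTorch was built with CUDA support and can see the GPU (a generic check, not specific to this repository):

```bash
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```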

### Start the Model Service

First, start the model service:

```bash
python model_server.py
```

### Run the Web Demo

Next, run the web demo to access the model:

```bash
python web_demo.py
```

You can then access the model by visiting http://localhost:8888.
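
If the demo runs on a remote machine rather than your local workstation, you can forward the port over SSH and open the same URL in your local browser (a generic example; replace `user@remote-host` with your own host):

```bash
# Forward local port 8888 to port 8888 on the remote machine running web_demo.py
ssh -L 8888:localhost:8888 user@remote-host
```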

### Additional Dependencies

If web_demo.py fails because the `matcha.models` module is missing, you will see an error like this:

```
ModuleNotFoundError: No module named 'matcha.models'; 'matcha' is not a package
```

In this case, you need to install matcha-tts:

```bash
# First, uninstall gradio and diffusers to avoid version conflicts
pip uninstall gradio
pip uninstall diffusers
pip install matcha-tts
```
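
You can confirm the fix by importing the module that previously failed (a simple check based on the error message above):

```bash
python -c "import matcha.models; print('matcha-tts is available')"
```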