Can Yi-34B-Chat-4bits use vllm for inference?

#4
by wangdafa - opened

The start command is as follows:
python -m fastchat.serve.vllm_worker --model-path 01-ai/Yi-34B-Chat-4bits --trust-remote-code --tensor-parallel-size 2 --quantization awq --max-model-len 4096 --model-name Qwen-14B-Chat
I found that the prompt template of Yi-34B-Chat is the same as Qwen-chat's. When using vLLM through FastChat, you can select that template by passing "--model-name Qwen-14B-Chat"; if it is not added, the template falls back to FastChat's default one-shot template. However, when the model is started this way it does not output any content, even though it loads successfully.
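For anyone who wants to rule out FastChat's template handling, a minimal sketch (not from this thread) is to serve the checkpoint with vLLM's own OpenAI-compatible server and send a raw ChatML-style prompt directly. The prompt string, port, and stop token below are assumptions based on the Qwen-style template described above.

```
# Serve the AWQ checkpoint with vLLM's OpenAI-compatible server, bypassing FastChat,
# to check whether the engine itself produces output.
python -m vllm.entrypoints.openai.api_server \
    --model 01-ai/Yi-34B-Chat-4bits \
    --quantization awq \
    --dtype float16 \
    --tensor-parallel-size 2 \
    --max-model-len 4096 \
    --trust-remote-code

# In a second shell: send a raw completion using the ChatML-style template the post describes.
curl http://localhost:8000/v1/completions \
    -H "Content-Type: application/json" \
    -d '{
          "model": "01-ai/Yi-34B-Chat-4bits",
          "prompt": "<|im_start|>user\nHello, who are you?<|im_end|>\n<|im_start|>assistant\n",
          "max_tokens": 128,
          "stop": ["<|im_end|>"]
        }'
```

If this produces text but the FastChat worker does not, the problem is likely the conversation template rather than the quantized weights.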

Please keep an eye on https://github.com/lm-sys/FastChat/pull/2723 to see whether it resolves your issue.

01-ai org

Please try the main branch of vllm.
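For reference, a minimal sketch of picking up the main branch is a source install; this assumes the usual vLLM CUDA build requirements are already available in the environment.

```
# Remove the released package, then build and install vLLM from the main branch.
pip uninstall -y vllm
pip install git+https://github.com/vllm-project/vllm.git
```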

tianjun changed discussion status to closed
