vLLM support

by Sihangli - opened

Hi, is there any guidance for running inference with vLLM? It does not seem to be supported by the current version of vLLM (0.4.2).

DeepSeek org

Thank you for your interest in our work. We are aware of the challenges of implementing KV compression on top of the current open-source code and are actively working on it. The Hugging Face code is not as efficient as we would like, so we are developing a new open-source implementation based on vLLM for better performance. The vLLM code, including KV compression, will be released once it is ready.
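For readers looking for the general pattern, offline inference in vLLM follows the `LLM`/`SamplingParams` API below. This is only a sketch: it assumes a vLLM build that recognizes the DeepSeek-V2 architecture (which the 0.4.2 release discussed above does not), and the `tensor_parallel_size` value is an illustrative placeholder to adjust for your hardware.

```python
# Sketch: standard vLLM offline-inference pattern, assuming a vLLM
# build with DeepSeek-V2 support (not the case in vLLM 0.4.2).
from vllm import LLM, SamplingParams

llm = LLM(
    model="deepseek-ai/DeepSeek-V2-Chat",
    trust_remote_code=True,   # the model repo ships custom modeling code
    tensor_parallel_size=8,   # placeholder: set to the number of GPUs available
)

params = SamplingParams(temperature=0.7, max_tokens=256)

# generate() takes a list of prompts and returns one RequestOutput per prompt
outputs = llm.generate(["Hello, who are you?"], params)
for out in outputs:
    print(out.outputs[0].text)
```

Running this requires multiple GPUs with enough memory for the model weights, so it is not something to try on the 0.4.2 release until official support ships.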

@msr2000 Thanks for your efforts! Is there any difference between the open-source model on HF and the API version you provide?


Hello, does vLLM currently support the DeepSeek-V2-Chat model? Please give me some guidance.

Hi, does vLLM currently support the DeepSeek-V2-Chat model? Looking forward to your reply.
