vLLM support
#2
opened by Sihangli
Hi, is there any guidance for running inference with vLLM? This model seems unsupported by the current version of vLLM (0.4.2).
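For context, this is the kind of minimal vLLM call I am attempting; the sampling settings and `tensor_parallel_size` are just illustrative assumptions, and on 0.4.2 it fails because the DeepSeekV2 architecture is not recognized:

```python
from vllm import LLM, SamplingParams

# Illustrative sketch only: tensor_parallel_size is an assumed
# multi-GPU setting; adjust it to your hardware.
llm = LLM(
    model="deepseek-ai/DeepSeek-V2-Chat",
    trust_remote_code=True,  # the repo ships custom modeling code
    tensor_parallel_size=8,
)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Explain multi-head latent attention briefly."], params)
print(outputs[0].outputs[0].text)
```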
Thank you for your interest in our work. We are aware of the challenges of implementing KV compression in the current open-source code and are actively working on it. The HuggingFace code is not as efficient as we would like, so we are developing a new open-source implementation on top of vLLM for better performance. The vLLM code, including KV compression, will be released once it is ready.
+1
Hello, does vLLM currently support the DeepSeek-V2-Chat model? Please give me some guidance.
Hi, does vLLM currently support the DeepSeek-V2-Chat model? Looking forward to your reply.