Just like in this discussion, running this model in vLLM hits the same issue. Tested with the nightly Docker image `vllm/vllm-openai@sha256:0d0104a260b69ce0bff9badde7620b8d796abfd067c327451f7ae0b09c761c9f`.
"vllm/vllm-openai@sha256:0d0104a260b69ce0bff9badde7620b8d796abfd067c327451f7ae0b09c761c9f"