encountered error when loading model

#4
by zhouzr - opened

vLLM 0.3.1

ValueError: Couldn't instantiate the backend tokenizer from one of:
(1) a tokenizers library serialization file,
(2) a slow tokenizer instance to convert or
(3) an equivalent slow tokenizer class to instantiate and convert.
You need to have sentencepiece installed to convert a slow tokenizer to a fast one.
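In case it helps with debugging: this error usually means the fast tokenizer cannot be built because sentencepiece is missing. A minimal check outside vLLM, assuming transformers and sentencepiece are installed (the repo id below is a placeholder for this model):

```python
# Quick check that the fast tokenizer can be instantiated at all.
# Requires: pip install transformers sentencepiece
from transformers import AutoTokenizer

MODEL_ID = "org/quantized-model"  # placeholder: replace with this model's repo id

tok = AutoTokenizer.from_pretrained(MODEL_ID, use_fast=True)
print(type(tok).__name__)  # prints a *Fast tokenizer class if the conversion succeeded
```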

You need vLLM 0.4.1.

But 0.4.1 is not out yet? Only 0.4.0.post1 is available?


vLLM 0.4.1 is in "pre-release"; see https://github.com/vllm-project/vllm/releases

It does seem to run on v0.4.0.post1, since generation_config.json has the updated eos_token; see https://github.com/vllm-project/vllm/issues/4180#issuecomment-2066187578
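For reference, a minimal sketch of running a quantized model with vLLM and checking that generation actually stops at EOS. The repo id is a placeholder, the quantization argument should match the checkpoint format (e.g. "awq" or "gptq"), and passing stop_token_ids explicitly is just a defensive workaround in case the updated eos_token isn't picked up:

```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

MODEL_ID = "org/quantized-model"  # placeholder: replace with this model's repo id

tok = AutoTokenizer.from_pretrained(MODEL_ID)
llm = LLM(model=MODEL_ID, quantization="awq")  # or "gptq", matching the checkpoint

params = SamplingParams(
    max_tokens=256,
    stop_token_ids=[tok.eos_token_id],  # explicit stop in case eos isn't honored
)

out = llm.generate(["Hello, how are you?"], params)[0].outputs[0]
print(out.text)
print(out.finish_reason)  # "stop" = ended at EOS; "length" = hit max_tokens
```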

Some versions of vLLM have issues with quantized models. v0.4.0.post1 is the latest that I have confirmed to work personally.

I've just built a Docker image for the latest vLLM dev version (v0.4.1.dev; see https://hub.docker.com/r/aiappsref/vllm/tags), tested it with this quantized model, and it seems to be working.
It actually works better than v0.4.0.post1, which was not stopping its response at the EOS token and kept generating (a known bug fixed in v0.4.1).

I still have it on 0.4.1
