Fails to run with nm-vllm

#1
by clintonruairi - opened

Hello,

python version: 3.11
os: WSL

The only other dependencies are those specified by nm-vllm. I followed the instructions here:

https://github.com/neuralmagic/nm-vllm

I ran the model as follows, after installing all of the dependencies in a fresh conda env:

(nm-vllm) unix@rog-zephyrus:/code/structure$ pip install nm-vllm[sparse]
(nm-vllm) unix@rog-zephyrus:/code/structure$ python -m vllm.entrypoints.openai.api_server --model neuralmagic/Meta-Llama-3-8B-Instruct-FP8 --sparsity sparse_w16a16
INFO 05-11 23:41:26 api_server.py:149] vLLM API server version 0.2.0

warnings.warn(
Traceback (most recent call last):
File "", line 198, in _run_module_as_main
File "", line 88, in _run_code
File "/home/unix/miniconda3/envs/nm-vllm/lib/python3.11/site-packages/vllm/entrypoints/openai/api_server.py", line 157, in
engine = AsyncLLMEngine.from_engine_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/unix/miniconda3/envs/nm-vllm/lib/python3.11/site-packages/vllm/engine/async_llm_engine.py", line 331, in from_engine_args
engine_configs = engine_args.create_engine_configs()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/unix/miniconda3/envs/nm-vllm/lib/python3.11/site-packages/vllm/engine/arg_utils.py", line 405, in create_engine_configs
model_config = ModelConfig(
^^^^^^^^^^^^
File "/home/unix/miniconda3/envs/nm-vllm/lib/python3.11/site-packages/vllm/config.py", line 133, in init
self._verify_quantization()
File "/home/unix/miniconda3/envs/nm-vllm/lib/python3.11/site-packages/vllm/config.py", line 234, in _verify_quantization
raise ValueError(
ValueError: Unknown quantization method: fp8. Must be one of ['awq', 'gptq', 'squeezellm', 'marlin'].

Am I doing something wrong here? I also tried running the model in standard vLLM, v0.4.2. Performance was great, but about 30% of responses were bizarre, with many consisting of long runs of "!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!".

Neural Magic org

FP8 will be supported in our next release of nm-vllm.

There is a bug in vLLM v0.4.2 for fp8 static quantization. It is resolved on the latest main and will be fixed in v0.4.3.

So in the meantime I would suggest installing vLLM from source.
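
For anyone hitting the same error, a from-source install usually looks something like the sketch below (a rough outline only, using the standard upstream vLLM repository and build steps; check the vLLM README for the exact current instructions). The --sparsity flag is an nm-vllm extension, so it is omitted when running upstream vLLM:

git clone https://github.com/vllm-project/vllm.git
cd vllm
pip install -e .
python -m vllm.entrypoints.openai.api_server --model neuralmagic/Meta-Llama-3-8B-Instruct-FP8

The editable install builds the CUDA kernels locally, which can take a while.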
