Can vLLM be used for loading?

#4
by wawoshashi - opened

Can vLLM be used for loading?

It was quantized with Aphrodite-engine in mind, but it should work on vLLM too.
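For reference, here is a minimal sketch of loading it with vLLM's offline `LLM` API. This is not from the original thread: the repo id, quantization method, and tensor-parallel size below are placeholders, so substitute whatever this checkpoint actually uses.

```python
from vllm import LLM, SamplingParams

llm = LLM(
    model="your-namespace/c4ai-command-r-v01-GPTQ",  # hypothetical repo id
    quantization="gptq",      # assumption: match the quant method actually used
    tensor_parallel_size=2,   # assumption: the traceback below shows a multi-GPU Ray setup
)

out = llm.generate(["Hello, world"], SamplingParams(max_tokens=16))
print(out[0].outputs[0].text)
```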

```
Traceback (most recent call last):
  File "/home/czb/miniconda3/envs/vllm/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/czb/miniconda3/envs/vllm/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/czb/miniconda3/envs/vllm/lib/python3.9/site-packages/vllm/entrypoints/openai/api_server.py", line 157, in <module>
    engine = AsyncLLMEngine.from_engine_args(
  File "/home/czb/miniconda3/envs/vllm/lib/python3.9/site-packages/vllm/engine/async_llm_engine.py", line 348, in from_engine_args
    engine = cls(
  File "/home/czb/miniconda3/envs/vllm/lib/python3.9/site-packages/vllm/engine/async_llm_engine.py", line 311, in __init__
    self.engine = self._init_engine(*args, **kwargs)
  File "/home/czb/miniconda3/envs/vllm/lib/python3.9/site-packages/vllm/engine/async_llm_engine.py", line 422, in _init_engine
    return engine_class(*args, **kwargs)
  File "/home/czb/miniconda3/envs/vllm/lib/python3.9/site-packages/vllm/engine/llm_engine.py", line 110, in __init__
    self.model_executor = executor_class(model_config, cache_config,
  File "/home/czb/miniconda3/envs/vllm/lib/python3.9/site-packages/vllm/executor/ray_gpu_executor.py", line 62, in __init__
    self._init_workers_ray(placement_group)
  File "/home/czb/miniconda3/envs/vllm/lib/python3.9/site-packages/vllm/executor/ray_gpu_executor.py", line 192, in _init_workers_ray
    self._run_workers(
  File "/home/czb/miniconda3/envs/vllm/lib/python3.9/site-packages/vllm/executor/ray_gpu_executor.py", line 324, in _run_workers
    driver_worker_output = getattr(self.driver_worker,
  File "/home/czb/miniconda3/envs/vllm/lib/python3.9/site-packages/vllm/worker/worker.py", line 107, in load_model
    self.model_runner.load_model()
  File "/home/czb/miniconda3/envs/vllm/lib/python3.9/site-packages/vllm/worker/model_runner.py", line 95, in load_model
    self.model = get_model(
  File "/home/czb/miniconda3/envs/vllm/lib/python3.9/site-packages/vllm/model_executor/model_loader.py", line 101, in get_model
    model.load_weights(model_config.model, model_config.download_dir,
  File "/home/czb/miniconda3/envs/vllm/lib/python3.9/site-packages/vllm/model_executor/models/commandr.py", line 325, in load_weights
    param = params_dict[name]
KeyError: 'model.layers.42.mlp.down_proj.bias'
(RayWorkerVllm pid=16133) ERROR 04-13 15:05:36 ray_utils.py:44] Error executing method load_model. This might cause deadlock in distributed execution.
(RayWorkerVllm pid=16133) ERROR 04-13 15:05:36 ray_utils.py:44] Traceback (most recent call last):
(RayWorkerVllm pid=16133) ERROR 04-13 15:05:36 ray_utils.py:44]   File "/home/czb/miniconda3/envs/vllm/lib/python3.9/site-packages/vllm/engine/ray_utils.py", line 37, in execute_method
(RayWorkerVllm pid=16133) ERROR 04-13 15:05:36 ray_utils.py:44]     return executor(*args, **kwargs)
(RayWorkerVllm pid=16133) ERROR 04-13 15:05:36 ray_utils.py:44]   File "/home/czb/miniconda3/envs/vllm/lib/python3.9/site-packages/vllm/worker/worker.py", line 107, in load_model
(RayWorkerVllm pid=16133) ERROR 04-13 15:05:36 ray_utils.py:44]     self.model_runner.load_model()
(RayWorkerVllm pid=16133) ERROR 04-13 15:05:36 ray_utils.py:44]   File "/home/czb/miniconda3/envs/vllm/lib/python3.9/site-packages/vllm/worker/model_runner.py", line 95, in load_model
(RayWorkerVllm pid=16133) ERROR 04-13 15:05:36 ray_utils.py:44]     self.model = get_model(
(RayWorkerVllm pid=16133) ERROR 04-13 15:05:36 ray_utils.py:44]   File "/home/czb/miniconda3/envs/vllm/lib/python3.9/site-packages/vllm/model_executor/model_loader.py", line 101, in get_model
(RayWorkerVllm pid=16133) ERROR 04-13 15:05:36 ray_utils.py:44]     model.load_weights(model_config.model, model_config.download_dir,
(RayWorkerVllm pid=16133) ERROR 04-13 15:05:36 ray_utils.py:44]   File "/home/czb/miniconda3/envs/vllm/lib/python3.9/site-packages/vllm/model_executor/models/commandr.py", line 325, in load_weights
(RayWorkerVllm pid=16133) ERROR 04-13 15:05:36 ray_utils.py:44]     param = params_dict[name]
(RayWorkerVllm pid=16133) ERROR 04-13 15:05:36 ray_utils.py:44] KeyError: 'model.layers.42.mlp.down_proj.bias'
(RayWorkerVllm pid=16133) INFO 04-13 15:05:31 pynccl_utils.py:45] vLLM is using nccl==2.18.1 [repeated 2x across cluster]
(RayWorkerVllm pid=15910) WARNING 04-13 15:05:33 custom_all_reduce.py:45] Custom allreduce is disabled because your platform lacks GPU P2P capability or P2P test failed. To silence this warning, specify disable_custom_all_reduce=True explicitly. [repeated 2x across cluster]
(RayWorkerVllm pid=16133) INFO 04-13 15:05:35 weight_utils.py:177] Using model weights format ['*.safetensors'] [repeated 2x across cluster]
```
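The `KeyError` means the checkpoint ships tensors (here `model.layers.42.mlp.down_proj.bias`) that the Command R weight loader in that vLLM build has no matching parameter for. One quick way to confirm which `.bias` tensors the checkpoint actually contains is to list the tensor names in a shard; the file name below is a placeholder.

```python
from safetensors import safe_open

# List every ".bias" tensor in one safetensors shard of the checkpoint
# (replace the file name with an actual shard from the repo).
with safe_open("model-00001-of-00004.safetensors", framework="pt") as f:
    for name in f.keys():
        if name.endswith(".bias"):
            print(name)  # e.g. model.layers.42.mlp.down_proj.bias
```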

You need the latest version of vLLM for this to work.
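If it still fails after upgrading, it is worth confirming which build the environment actually resolves (a hypothetical sanity check, not from the thread):

```python
import vllm

# Per this thread, 0.4.0.post builds still hit the KeyError; a newer build
# (the main branch at the time) is needed.
print(vllm.__version__)
```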

Okay, it works.

I have installed vLLM==0.4.0.post and I have the same issue. How did you manage to solve it, @wawoshashi?

@ordkill You need to clone the vLLM main branch and `pip install -e .` from source.
