How to use this model?

#1 by pseudotensor - opened
python -m sglang.launch_server --model-path lmms-lab/LLaVA-NeXT-Video-34B --port=30004 --host="0.0.0.0" --tp-size=1 --random-seed=1234 --context-length=8192

The server fails to start with:

/home/ubuntu/miniconda3/envs/sglang/lib/python3.10/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
config.json: 100%|████████████████████████████████| 1.80k/1.80k [00:00<00:00, 17.7MB/s]
/home/ubuntu/miniconda3/envs/sglang/lib/python3.10/site-packages/transformers/models/llava/configuration_llava.py:100: FutureWarning: The `vocab_size` argument is deprecated and will be removed in v4.42, since it can be inferred from the `text_config`. Passing this argument has no effect
  warnings.warn(
tokenizer_config.json: 100%|████████████████████████████████| 1.62k/1.62k [00:00<00:00, 20.2MB/s]
Traceback (most recent call last):
  File "/home/ubuntu/miniconda3/envs/sglang/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 304, in hf_raise_for_status
    response.raise_for_status()
  File "/home/ubuntu/miniconda3/envs/sglang/lib/python3.10/site-packages/requests/models.py", line 1024, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/lmms-lab/LLaVA-NeXT-Video-34B/resolve/main/preprocessor_config.json
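The 404 suggests the repo simply does not host a `preprocessor_config.json`, which the loader tries to fetch. One way to confirm this before launching the server is to list the repo's files with `huggingface_hub` (`list_repo_files` is a real API; the `missing_files` helper below is a hypothetical sketch):

```python
def missing_files(repo_files, required=("config.json", "preprocessor_config.json")):
    """Return the required filenames that are absent from a repo file listing.

    `repo_files` is a list of filenames as returned by
    huggingface_hub.list_repo_files(repo_id).
    """
    return [name for name in required if name not in repo_files]


if __name__ == "__main__":
    # Network call: fetch the actual file listing for the repo from the Hub.
    from huggingface_hub import list_repo_files

    files = list_repo_files("lmms-lab/LLaVA-NeXT-Video-34B")
    print(missing_files(files))
```

If `preprocessor_config.json` shows up as missing, the error is on the repo side rather than in the launch command.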
