Runtime error

  warnings.warn(f'for {key}: copying from a non-meta parameter in the checkpoint to a meta '
/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py:2025: UserWarning: for vision_model.post_layernorm.weight: copying from a non-meta parameter in the checkpoint to a meta parameter in the current model, which is a no-op. (Did you mean to pass `assign=True` to assign items in the state dictionary to their corresponding key in the module instead of copying them in place?)
  warnings.warn(f'for {key}: copying from a non-meta parameter in the checkpoint to a meta '
/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py:2025: UserWarning: for vision_model.post_layernorm.bias: copying from a non-meta parameter in the checkpoint to a meta parameter in the current model, which is a no-op. (Did you mean to pass `assign=True` to assign items in the state dictionary to their corresponding key in the module instead of copying them in place?)
  warnings.warn(f'for {key}: copying from a non-meta parameter in the checkpoint to a meta '
Loading checkpoint shards:   0%|          | 0/4 [00:00<?, ?it/s]
Loading checkpoint shards: 100%|██████████| 4/4 [00:00<00:00, 78033.56it/s]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 67, in <module>
    handler = Chat(model_path, conv_mode=conv_mode,
  File "/home/user/app/llava/serve/gradio_utils.py", line 155, in __init__
    self.tokenizer, self.model, self.processor, context_len = load_pretrained_model(
  File "/home/user/app/llava/model/builder.py", line 145, in load_pretrained_model
    model = LlavaLlamaForCausalLM.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3917, in from_pretrained
    dispatch_model(model, **device_map_kwargs)
  File "/usr/local/lib/python3.10/site-packages/accelerate/big_modeling.py", line 387, in dispatch_model
    raise ValueError(
ValueError: You are trying to offload the whole model to the disk. Please use the `disk_offload` function instead.
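The ValueError at the end is the actionable part: accelerate's dispatch_model computed a device_map that places every module on disk, which usually means the container has no GPU and too little CPU RAM to hold the checkpoint. The preceding UserWarnings about meta parameters are a symptom of the same situation: checkpoint weights are being copied into a model whose parameters still live on the meta device. Below is a minimal sketch of the usual loading pattern that avoids full-disk offload, assuming a transformers-style loader; the checkpoint path and offload folder are placeholders, not the app's actual configuration:

    # Hedged sketch: constrain placement so only the overflow,
    # not the whole model, is offloaded to disk.
    import torch
    from transformers import AutoModelForCausalLM

    model = AutoModelForCausalLM.from_pretrained(
        "path/to/checkpoint",       # placeholder for the real model path
        torch_dtype=torch.float16,  # halves the footprint vs. float32
        device_map="auto",          # fill GPU first, then CPU, then disk
        offload_folder="offload",   # required once any weights land on disk
    )

If the computed map still resolves to disk for every module (no GPU, minimal RAM), accelerate raises exactly this ValueError and points at accelerate.disk_offload(model, offload_dir="offload") as the explicit alternative; in practice, upgrading the Space's hardware or switching to a smaller or quantized checkpoint is usually the more workable fix.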
