While running multimodal inference with the lmms-lab/llava-critic-7b model, generation crashes with "Floating point exception (core dumped)".
Error log:
opt/miniconda3/envs/llava/lib/python3.10/site-packages/huggingface_hub/file_download.py:1142: FutureWarning: resume_download
is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use force_download=True
.
warnings.warn(
Loaded LLaVA model: /HuggingFace-Download-Accelerator/hf_hub/models--lmms-lab--llava-critic-7b
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
You are using a model of type llava to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Loading vision tower: google/siglip-so400m-patch14-384
Loading vision tower from local path: /HuggingFace-Download-Accelerator/hf_hub/models--google--siglip-so400m-patch14-384
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:20<00:00, 5.21s/it]
Model Class: LlavaQwenForCausalLM
[Loading pretrained model] completed in 35.25 seconds
Loading image from:/data_sample/critic_img_seven.png
[Processing image] completed in 0.33 seconds
[Setting up conversation template] completed in 0.00 seconds
Tokenizing input question...
[Tokenizing input] completed in 0.00 seconds
Generating model response...
input_ids device: cuda:7
image_tensor[0] device: cuda:7
image_tensor[0] dtype: torch.float16
Parameter model.image_newline dtype: torch.float16
Floating point exception (core dumped)
I only modified the code to load the model from a local path, open the image locally, and pin the device to cuda:7 (the GPU is an H20); everything else is identical to the code provided by the project. Debugging narrows the crash down to the model.generate() call, which aborts with "Floating point exception (core dumped)".
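Because a SIGFPE kills the interpreter without a Python traceback, it is hard to see which frame inside model.generate() is active when the crash happens. One way to narrow it down, sketched below using only the standard library (run_generate is a hypothetical stand-in for the project's actual generation call, not code from the repo):

```python
import faulthandler

# Enable low-level fault reporting: on a fatal signal such as SIGFPE
# (the signal behind "Floating point exception (core dumped)"), Python
# dumps the tracebacks of all threads to stderr before the process dies.
# This must be called before the crashing model.generate() call.
faulthandler.enable(all_threads=True)

def run_generate(model, input_ids, image_tensor):
    # Hypothetical wrapper around the project's generation call; with
    # faulthandler enabled, a crash inside it reports the active frame.
    return model.generate(input_ids, images=image_tensor)
```

The dumped traceback usually points at the specific kernel or preprocessing step (e.g. a dtype or device mismatch between the fp16 image tensor and a model parameter) rather than just the opaque core dump.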