
Problem:

I started the chatglm2-6b-32k model, using load_model_on_gpus to load it across three GPUs. During inference the following error occurred:

File "/home/nmnormal1/.cache/huggingface/modules/transformers_modules/chatglm2-6b-32k/modeling_chatglm.py", line 655, in forward
presents = torch.cat((presents, kv_cache), dim=0)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1! (when checking argument for argument tensors in method wrapper_CUDA_cat)
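For reference, the loading step looks roughly like this. This is a sketch, assuming the load_model_on_gpus helper from utils.py in the official THUDM/ChatGLM2-6B repository; the model path is taken from the traceback:

```python
from transformers import AutoTokenizer
from utils import load_model_on_gpus  # helper shipped with the ChatGLM2-6B repo

tokenizer = AutoTokenizer.from_pretrained(
    "THUDM/chatglm2-6b-32k", trust_remote_code=True)
# Spread the transformer layers across three GPUs
# (layer-wise model parallelism via a device_map).
model = load_model_on_gpus("THUDM/chatglm2-6b-32k", num_gpus=3).eval()
```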

The place in my own code where the error surfaces is:
```python
for response, history, past_key_values in model.stream_chat(
        tokenizer, query, history=history,
        past_key_values=past_key_values,
        return_past_key_values=True):
```
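Expanding that call into a minimal repro sketch (the tokenizer/model setup above is assumed; the loop body in my original code is omitted, so `pass` is a placeholder):

```python
history, past_key_values = [], None  # fresh conversation state
for response, history, past_key_values in model.stream_chat(
        tokenizer, "hello", history=history,
        past_key_values=past_key_values,
        return_past_key_values=True):
    pass  # the first generation step already hits the torch.cat in modeling_chatglm.py
```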

Workaround:

After swapping the 6b-32k model for the plain chatglm2-6b model, the same code runs without errors. I therefore suspect the 32k version's modeling_chatglm.py is at fault, but since the file is pulled in via trust_remote_code, I can't change it myself. Asking for help!
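Not a confirmed fix, but the traceback points at the concatenation in modeling_chatglm.py: with the layers split across GPUs, a layer on cuda:1 returns a kv_cache that cannot be concatenated with a `presents` tensor living on cuda:0. One hypothetical, untested patch is to align devices just before the cat. Note that trust_remote_code executes the cached copy under ~/.cache/huggingface/modules/transformers_modules/, so that file can usually be edited in place for a quick test:

```python
# modeling_chatglm.py, around line 655 (hypothetical patch, untested):
# move the current layer's cache onto the device of the accumulated
# `presents` tensor so torch.cat sees tensors on a single device.
kv_cache = kv_cache.to(presents.device)
presents = torch.cat((presents, kv_cache), dim=0)
```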
