RuntimeError: expected self and mask to be on the same device, but got mask on cpu and self on cuda:0

#1
by bupalinyu - opened

I just got the error below. Any idea how to fix it?
[main-process] [2023-03-08 16:21:58,708] [INFO] [runner.py:635:report_job_message] update modelhub job message with job_id=1547002696 message={"pod": "aistudio-izpxpuje-ptjob-master-0", "type": "RuntimeError", "message": "RuntimeError:expected self and mask to be on the same device, but got mask on cpu and self on cuda:0"}
[main-process] [2023-03-08 16:21:58,876] [INFO] [runner.py:594:remote_run] Traceback (most recent call last):
  File "/root/miniconda3/lib/python3.8/site-packages/alps/framework/runner.py", line 592, in remote_run
    ret = self.start()
  File "/root/miniconda3/lib/python3.8/site-packages/alps/framework/base.py", line 640, in start
    ret = self.run()
  File "/root/miniconda3/lib/python3.8/site-packages/alps/framework/predictor.py", line 738, in run
    self._run_single(role_num, work_num)
  File "/root/miniconda3/lib/python3.8/site-packages/alps/framework/predictor.py", line 1198, in _run_single
    records = self._do_predict(batch_input, predictor, trace_flag)
  File "/root/miniconda3/lib/python3.8/site-packages/alps/framework/predictor.py", line 1325, in _do_predict
    predict_results = predictor(feed_tensors)
  File "/root/miniconda3/lib/python3.8/site-packages/alps/framework/predictor.py", line 243, in call
    return self.predict(*args, **kwargs)
  File ".//model_predict.py", line 99, in predict
    raise e
  File ".//model_predict.py", line 80, in predict
    preds = self.model(
  File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/workspace/base_module.py", line 64, in forward
    return self.model.generate(
  File "/root/miniconda3/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/transformers/generation/utils.py", line 1652, in generate
    return self.beam_sample(
  File "/root/miniconda3/lib/python3.8/site-packages/transformers/generation/utils.py", line 3111, in beam_sample
    outputs = self(
  File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/THUDM/glm-10b-chinese/01717dd11fb4d3a80c465eee9eb4789bc954046d/modeling_glm.py", line 902, in forward
    model_output = self.glm(input_ids, position_ids, attention_mask, mems=mems, **kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/THUDM/glm-10b-chinese/01717dd11fb4d3a80c465eee9eb4789bc954046d/modeling_glm.py", line 783, in forward
    transformer_output = self.transformer(embeddings, position_ids, attention_mask, mems)
  File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/THUDM/glm-10b-chinese/01717dd11fb4d3a80c465eee9eb4789bc954046d/modeling_glm.py", line 554, in forward
    attention_mask = build_mask_matrix(query_length, sep, memory_length=memory_length)
  File "/root/.cache/huggingface/modules/transformers_modules/THUDM/glm-10b-chinese/01717dd11fb4d3a80c465eee9eb4789bc954046d/modeling_glm.py", line 547, in build_mask_matrix
    m = m.masked_fill(mask.unsqueeze(1).expand_as(m), 1)
RuntimeError: expected self and mask to be on the same device, but got mask on cpu and self on cuda:0
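The last frame points at build_mask_matrix in modeling_glm.py: masked_fill is called with a mask that is still on the CPU while the hidden states are on cuda:0, which usually means one of the inputs passed to generate() (e.g. attention_mask or position_ids) was never moved to the GPU. A minimal sketch of one workaround, assuming the inputs are assembled before calling generate() (model and feed_tensors below are placeholder names for whatever the predictor actually builds), is to move every input tensor onto the model's device first:

```python
import torch

def to_model_device(model, feed_tensors):
    """Move every tensor passed to generate() onto the model's device.

    `model` and `feed_tensors` are placeholders for whatever your predictor
    constructs; only the .to(device) calls matter here.
    """
    device = next(model.parameters()).device  # e.g. cuda:0
    return {
        name: value.to(device) if isinstance(value, torch.Tensor) else value
        for name, value in feed_tensors.items()
    }

# Hypothetical usage inside model_predict.py, before calling generate():
# feed_tensors = to_model_device(self.model, feed_tensors)
# preds = self.model(**feed_tensors)
```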

RuntimeError: expected self and mask to be on the same device, but got mask on cpu and self on cuda:0

Isn't it obvious?

@shellwang I'm running into this problem too, but I don't quite understand what you mean. How is it supposed to be fixed?

RuntimeError: expected self and mask to be on the same device, but got mask on cpu and self on cuda:0

Isn't it obvious?

Why are you talking in riddles?

Has anyone solved this?
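For anyone still stuck: if moving the inputs to the GPU doesn't help, the mismatch originates in build_mask_matrix in the cached modeling_glm.py (the frame at line 547 of the traceback). A hedged local patch, assuming you are willing to edit the cached copy under ~/.cache/huggingface/modules/transformers_modules/THUDM/glm-10b-chinese/, is to move the mask onto m's device before masked_fill:

```python
# In build_mask_matrix (the line shown in the traceback), add .to(m.device)
# so the mask lives on the same device as the tensor being filled:
m = m.masked_fill(mask.unsqueeze(1).expand_as(m).to(m.device), 1)
```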
