Failed to run the example

#4
by Leymore - opened

First, model_path is not defined:

model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True, device_map="auto", torch_dtype=torch.bfloat16, max_memory=max_memory)

should be

model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True, device_map="auto", torch_dtype=torch.bfloat16, max_memory=max_memory)
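
For reference, the snippet assumes model_name and max_memory are already defined. A minimal setup sketch (the repo id comes from the traceback below; the per-GPU budget is an assumed value, not an official one):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "deepseek-ai/DeepSeek-V2-Chat"  # repo id, as seen in the traceback
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
max_memory = {i: "75GB" for i in range(8)}   # assumed budget for 8 * 80GB A100s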

Then this error comes up:

  File "/cpfs01/user/zhoufengzhe/anaconda3/envs/lmdeploy/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 558, in from_pretrained
    return model_class.from_pretrained(
  File "/cpfs01/user/zhoufengzhe/anaconda3/envs/lmdeploy/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3550, in from_pretrained
    model = cls(config, *model_args, **model_kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/deepseek-ai/DeepSeek-V2-Chat/2c190bb36705793d723008b0c1fa3a691c227485/modeling_deepseek.py", line 1588, in __init__
    self.model = DeepseekV2Model(config)
  File "/root/.cache/huggingface/modules/transformers_modules/deepseek-ai/DeepSeek-V2-Chat/2c190bb36705793d723008b0c1fa3a691c227485/modeling_deepseek.py", line 1404, in __init__
    [
  File "/root/.cache/huggingface/modules/transformers_modules/deepseek-ai/DeepSeek-V2-Chat/2c190bb36705793d723008b0c1fa3a691c227485/modeling_deepseek.py", line 1405, in <listcomp>
    DeepseekV2DecoderLayer(config, layer_idx)
  File "/root/.cache/huggingface/modules/transformers_modules/deepseek-ai/DeepSeek-V2-Chat/2c190bb36705793d723008b0c1fa3a691c227485/modeling_deepseek.py", line 1187, in __init__
    self.self_attn = ATTENTION_CLASSES[config._attn_implementation](
KeyError: 'sdpa'
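
For context: recent transformers versions default config._attn_implementation to "sdpa" when the model class claims to support it, and the remote modeling code looks that string up in a dispatch dict. A sketch of the pattern, reconstructed from the traceback (the exact dict contents are an assumption):

# inside modeling_deepseek.py (sketch)
ATTENTION_CLASSES = {
    "eager": DeepseekV2Attention,
    "flash_attention_2": DeepseekV2FlashAttention2,
    # no "sdpa" entry, so ATTENTION_CLASSES["sdpa"] raises KeyError
}
self.self_attn = ATTENTION_CLASSES[config._attn_implementation](config=config, layer_idx=layer_idx)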

I added the attn_implementation argument:

model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True, device_map="auto", attn_implementation='flash_attention_2', torch_dtype=torch.bfloat16, max_memory=max_memory)
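(Note that attn_implementation='flash_attention_2' requires the flash-attn package to be installed, and it only supports fp16/bf16 weights on CUDA devices.)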

It takes about half an hour to load the model, and then this error comes up:

  File "/cpfs01/user/zhoufengzhe/anaconda3/envs/lmdeploy/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 558, in from_pretrained
    return model_class.from_pretrained(
  File "/cpfs01/user/zhoufengzhe/anaconda3/envs/lmdeploy/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3735, in from_pretrained
    dispatch_model(model, **device_map_kwargs)
  File "/cpfs01/user/zhoufengzhe/anaconda3/envs/lmdeploy/lib/python3.10/site-packages/accelerate/big_modeling.py", line 364, in dispatch_model
    weights_map = OffloadedWeightsLoader(
  File "/cpfs01/user/zhoufengzhe/anaconda3/envs/lmdeploy/lib/python3.10/site-packages/accelerate/utils/offload.py", line 150, in __init__
    raise ValueError("Need either a `state_dict` or a `save_folder` containing offloaded weights.")

I am not sure whether the offload scheme should be activated at all.
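
One way to check whether accelerate actually plans to offload anything is to compute the device map on an empty model before loading any weights. A sketch, assuming model_name and max_memory are defined as above:

import torch
from accelerate import infer_auto_device_map, init_empty_weights
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained(model_name, trust_remote_code=True)
with init_empty_weights():
    # instantiated on the meta device, so no real memory is allocated
    empty_model = AutoModelForCausalLM.from_config(config, trust_remote_code=True, torch_dtype=torch.bfloat16)
device_map = infer_auto_device_map(empty_model, max_memory=max_memory, no_split_module_classes=["DeepseekV2DecoderLayer"])
# any module mapped to "cpu" or "disk" means offloading would kick in
print({name: dev for name, dev in device_map.items() if dev in ("cpu", "disk")})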

Anyway, has anyone run the model successfully?

BTW, I am using 8 * 80GB A100s.

DeepSeek org

Thanks for pointing out the typo; it's fixed now. Regarding the sdpa attention implementation in HuggingFace, we've had to remove it due to some issues. You might want to try our default eager implementation instead.

Moreover, the HuggingFace's code is not as efficient as we would like, so we're developing a new open-source code using vLLM for better performance.

When I change the attn_implementation to eager with

model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True, device_map="auto", torch_dtype=torch.bfloat16, max_memory=max_memory, attn_implementation='eager')

I still get the error:

ValueError: Need either a state_dict or a save_folder containing offloaded weights

@msr2000

I met the same problem when loading DeepSeek-V2:

File "/usr/local/lib/python3.8/dist-packages/accelerate/utils/offload.py", line 150, in init
raise ValueError("Need either a state_dict or a save_folder containing offloaded weights.")
ValueError: Need either a state_dict or a save_folder containing offloaded weights.`

DeepSeek org

The HuggingFace code and examples have been updated recently. Note that the line of code for loading the model should be replaced with:

model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True, device_map="sequential", torch_dtype=torch.bfloat16, max_memory=max_memory, attn_implementation="eager")

Please try again.
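
For anyone landing here later, a self-contained version of the updated example might look like this. It's a sketch: the max_memory budget and the generation settings are assumptions, so check the model card for the current official snippet:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

model_name = "deepseek-ai/DeepSeek-V2-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

max_memory = {i: "75GB" for i in range(8)}  # assumed budget for 8 * 80GB A100s
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True, device_map="sequential", torch_dtype=torch.bfloat16, max_memory=max_memory, attn_implementation="eager")
model.generation_config = GenerationConfig.from_pretrained(model_name)
model.generation_config.pad_token_id = model.generation_config.eos_token_id

messages = [{"role": "user", "content": "Write a piece of quicksort code in C++"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(input_ids.to(model.device), max_new_tokens=100)
print(tokenizer.decode(outputs[0][input_ids.shape[1]:], skip_special_tokens=True))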

I tried this, replacing

model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True, device_map="auto", torch_dtype=torch.bfloat16, max_memory=max_memory)

with

model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True, device_map="auto", torch_dtype=torch.bfloat16, max_memory=max_memory, offload_folder="save_folder")

and it works.
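
(This presumably works because offload_folder supplies the save_folder that OffloadedWeightsLoader complained about in the traceback above, giving accelerate somewhere to write the weights it decides to offload.)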

DeepSeek org

I tried this, replacing

model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True, device_map="auto", torch_dtype=torch.bfloat16, max_memory=max_memory)

with

model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True, device_map="auto", torch_dtype=torch.bfloat16, max_memory=max_memory, offload_folder="save_folder")

and it works.

Regarding the issue with the accelerate library's GPU memory computation: offloading is actually unnecessary when there is ample GPU memory available (8 * 80GB). Please refer to my previous response for more details.

With

model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True, device_map="sequential", torch_dtype=torch.bfloat16, max_memory=max_memory, attn_implementation="eager")

I can run it successfully, cheers~

Note the device_map="sequential".
