How to avoid running into memory/storage problems on HF Spaces while using tiiuae/falcon-7b, 40b, etc.

#82
by vsrinivas - opened

Please note that I did not encounter the problems explained here with many other LLMs I tried. I am trying to host an app using this model (in fact, I tried the 40b and instruct models as well). When the container is being built, it runs into some memory/storage issues related to the HF Spaces free account.

  1. The first problem is that you get the error: "ValueError: The current device_map had weights offloaded to the disk. Please provide an offload_folder for them. Alternatively, make sure you have safetensors installed if the model you are using offers the weights in this format."

  2. So, after installing 'safetensors', I tried again; the problem still persists. I therefore assume the Falcon models do not ship safetensors weights (hope someone can confirm). When I pass the 'offload_folder="offload"' parameter to 'AutoModelForCausalLM.from_pretrained', it does seem to work, but then runs into a memory issue, shown below, while loading checkpoint shards.

[screenshot: out-of-memory error while loading checkpoint shards]

  3. While performing the above step with the 40B model, it actually runs out of the 50G storage space limit.
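The storage blow-up with the 40B checkpoint is roughly what the raw parameter counts predict. A back-of-the-envelope sketch (parameter counts are the nominal 7B/40B figures; real usage adds buffers, activations, and temporary copies on top):

```python
# Rough size of raw model weights, ignoring all runtime overhead.
def weight_bytes(n_params: float, bytes_per_param: int) -> float:
    return n_params * bytes_per_param

GB = 1024 ** 3

# Falcon-40B upcast to fp32 (4 bytes/param), the default when no torch_dtype
# is given: ~149 GB of weights alone, far beyond a 50 GB Space quota.
print(weight_bytes(40e9, 4) / GB)  # ~149

# Falcon-7B kept in bf16 (2 bytes/param): ~13 GB, already close to the RAM
# of a free CPU Space, which is why shard loading still fails even with
# disk offload configured.
print(weight_bytes(7e9, 2) / GB)   # ~13
```

This is why offloading alone cannot rescue the 40B model on a free account: the weights themselves exceed the storage quota before any runtime memory is counted.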

I'd appreciate it if someone could help with some suggestions here.
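On the safetensors question above: one way to confirm what weight formats a repo actually ships is to inspect its file listing (`list_repo_files` from `huggingface_hub` returns exactly that). To keep this sketch offline, the check below runs against a hypothetical listing of the kind the error message implies, with only `.bin` shards; the real repo's contents may differ.

```python
# Sketch: does a repo's file listing include safetensors weights?
def has_safetensors(repo_files):
    return any(name.endswith(".safetensors") for name in repo_files)

# In real code: files = huggingface_hub.list_repo_files("tiiuae/falcon-7b")
# Hypothetical listing with only .bin shards, matching the error scenario:
example_listing = [
    "config.json",
    "pytorch_model-00001-of-00002.bin",
    "pytorch_model-00002-of-00002.bin",
    "tokenizer.json",
]
print(has_safetensors(example_listing))  # False -> disk offload needs offload_folder
```

If the listing has no `.safetensors` files, the "make sure you have safetensors installed" branch of the ValueError cannot help, and `offload_folder` is the only way forward.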

You might want to load the model this way:

```python
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", offload_folder="offload")
```

That is how I have loaded the model already:

```python
model = AutoModelForCausalLM.from_pretrained(
    checkpoint, device_map="auto", offload_folder="offload", trust_remote_code=True,
)
```

Any solution to this problem please?

Hello Vinayaru, did you find a solution to the problem? I have the same error, using the same model.

Best regards
Noureddine
