Falcon 40B Inference on GKE Autopilot A100 40GB

#82
by bshongwe

I am trying to run Falcon 40B inference on GKE Autopilot using the text-generation-inference Docker image.
However, the model fails with the following error while loading:

Error when initializing model
...
RuntimeError: CUDA error: the provided PTX was compiled with an unsupported toolchain.
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

The run command is:

text-generation-launcher --model-id tiiuae/falcon-40b-instruct --quantize bitsandbytes-nf4 --num-shard 1 --huggingface-hub-cache /usr/src/falcon-40b-instruct --weights-cache-override /usr/src/falcon-40b-instruct

I am using an A100 40GB, as I have struggled to get hold of an A100 80GB.

Is there anything in particular I need to change to get it working?

Did you ever find a solution to this? I'm having the same issue on my A100 as well.

Hey. No, I didn't find a solution. Instead of running it on GKE, I switched to dedicated GCP VMs to deploy the inference endpoint.

I was able to find a solution that may be helpful for you: try disabling the custom kernels via the environment variable DISABLE_CUSTOM_KERNELS=true. This has been suggested, with success, in other issues on the GitHub page for the HF text-generation-inference server. I'm not familiar enough with the system's internals to know exactly what this does, but the server appears to run fine with this flag set.
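For reference, a minimal sketch of how the flag can be set before launching. This assumes a shell inside the TGI container (or on the node); the guard around the launcher call is only there so the snippet can run standalone when text-generation-launcher isn't installed:

```shell
# Disable TGI's precompiled custom CUDA kernels so PyTorch's fallback
# implementations are used instead, sidestepping the
# "PTX was compiled with an unsupported toolchain" mismatch.
export DISABLE_CUSTOM_KERNELS=true

# Same launch command as in the original post; only invoked if the
# launcher binary is actually present on PATH.
if command -v text-generation-launcher >/dev/null 2>&1; then
  text-generation-launcher \
    --model-id tiiuae/falcon-40b-instruct \
    --quantize bitsandbytes-nf4 \
    --num-shard 1 \
    --huggingface-hub-cache /usr/src/falcon-40b-instruct \
    --weights-cache-override /usr/src/falcon-40b-instruct
fi
```

On Kubernetes the same variable can instead be set via the container's env section, so no image changes are needed.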
