RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED

#79
by ConorVanek - opened

I get the error below when I copy/paste and run the code from the model card.
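
For reference, the snippet I'm running is essentially the model-card example (reproduced from memory here, so the prompt and exact generation parameters are approximate):

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "tiiuae/falcon-7b"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,  # the model card loads the weights in bf16
    trust_remote_code=True,
    device_map="auto",
)
sequences = pipeline(
    "Girafatron is obsessed with giraffes.",  # placeholder prompt
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

Running that produces: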

```
Loading checkpoint shards: 100%|██████████████████| 2/2 [00:19<00:00, 9.83s/it]
/home/anthony/anaconda3/envs/huggingface/lib/python3.9/site-packages/transformers/generation/utils.py:1417: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see https://huggingface.co/docs/transformers/main_classes/text_generation )
  warnings.warn(
Setting `pad_token_id` to `eos_token_id`:11 for open-end generation.
Traceback (most recent call last):
  File "/home/anthony/huggingface/main.py", line 16, in <module>
    sequences = pipeline(
  File "/home/anthony/anaconda3/envs/huggingface/lib/python3.9/site-packages/transformers/pipelines/text_generation.py", line 205, in __call__
    return super().__call__(text_inputs, **kwargs)
  File "/home/anthony/anaconda3/envs/huggingface/lib/python3.9/site-packages/transformers/pipelines/base.py", line 1140, in __call__
    return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
  File "/home/anthony/anaconda3/envs/huggingface/lib/python3.9/site-packages/transformers/pipelines/base.py", line 1147, in run_single
    model_outputs = self.forward(model_inputs, **forward_params)
  File "/home/anthony/anaconda3/envs/huggingface/lib/python3.9/site-packages/transformers/pipelines/base.py", line 1046, in forward
    model_outputs = self._forward(model_inputs, **forward_params)
  File "/home/anthony/anaconda3/envs/huggingface/lib/python3.9/site-packages/transformers/pipelines/text_generation.py", line 268, in _forward
    generated_sequence = self.model.generate(input_ids=input_ids, attention_mask=attention_mask, **generate_kwargs)
  File "/home/anthony/anaconda3/envs/huggingface/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/anthony/anaconda3/envs/huggingface/lib/python3.9/site-packages/transformers/generation/utils.py", line 1648, in generate
    return self.sample(
  File "/home/anthony/anaconda3/envs/huggingface/lib/python3.9/site-packages/transformers/generation/utils.py", line 2730, in sample
    outputs = self(
  File "/home/anthony/anaconda3/envs/huggingface/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/anthony/anaconda3/envs/huggingface/lib/python3.9/site-packages/accelerate/hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "/home/anthony/.cache/huggingface/modules/transformers_modules/tiiuae/falcon-7b/f7796529e36b2d49094450fb038cc7c4c86afa44/modelling_RW.py", line 753, in forward
    transformer_outputs = self.transformer(
  File "/home/anthony/anaconda3/envs/huggingface/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/anthony/.cache/huggingface/modules/transformers_modules/tiiuae/falcon-7b/f7796529e36b2d49094450fb038cc7c4c86afa44/modelling_RW.py", line 648, in forward
    outputs = block(
  File "/home/anthony/anaconda3/envs/huggingface/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/anthony/anaconda3/envs/huggingface/lib/python3.9/site-packages/accelerate/hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "/home/anthony/.cache/huggingface/modules/transformers_modules/tiiuae/falcon-7b/f7796529e36b2d49094450fb038cc7c4c86afa44/modelling_RW.py", line 385, in forward
    attn_outputs = self.self_attention(
  File "/home/anthony/anaconda3/envs/huggingface/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/anthony/anaconda3/envs/huggingface/lib/python3.9/site-packages/accelerate/hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "/home/anthony/.cache/huggingface/modules/transformers_modules/tiiuae/falcon-7b/f7796529e36b2d49094450fb038cc7c4c86afa44/modelling_RW.py", line 279, in forward
    attn_output = F.scaled_dot_product_attention(
RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling cublasGemmStridedBatchedExFix(handle, opa, opb, (int)m, (int)n, (int)k, (void*)&falpha, a, CUDA_R_16BF, (int)lda, stridea, b, CUDA_R_16BF, (int)ldb, strideb, (void*)&fbeta, c, CUDA_R_16BF, (int)ldc, stridec, (int)num_batches, CUDA_R_32F, CUBLAS_GEMM_DEFAULT_TENSOR_OP)
```
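
The failing call is `F.scaled_dot_product_attention` on bfloat16 tensors (the cuBLAS call in the error message uses `CUDA_R_16BF`). Here is a minimal sketch I could use to isolate it, assuming a single CUDA device; the shapes below are illustrative, not falcon-7b's actual head dimensions:

```python
import torch
import torch.nn.functional as F

# Small bf16 tensors shaped (batch, heads, seq_len, head_dim); the values
# are random, only the dtype/device matter for hitting the same cuBLAS path.
q = torch.randn(1, 8, 16, 64, dtype=torch.bfloat16, device="cuda")
k = torch.randn(1, 8, 16, 64, dtype=torch.bfloat16, device="cuda")
v = torch.randn(1, 8, 16, 64, dtype=torch.bfloat16, device="cuda")

# If this also raises CUBLAS_STATUS_NOT_SUPPORTED, the problem is the bf16
# matmul itself rather than anything falcon-specific; re-running with
# dtype=torch.float16 would separate bf16 support from other causes.
out = F.scaled_dot_product_attention(q, k, v)
print(out.shape, out.dtype)
```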

I am using an Anaconda environment on Ubuntu 22.04.3 LTS. Here are the packages I have installed:

```
Package                  Version
------------------------ --------------
accelerate               0.22.0
accelerator              2023.7.18.dev1
bottle                   0.12.25
certifi                  2023.7.22
charset-normalizer       3.2.0
cmake                    3.27.4.1
einops                   0.6.1
filelock                 3.12.3
fsspec                   2023.9.0
huggingface-hub          0.16.4
idna                     3.4
inquirerpy               0.3.4
Jinja2                   3.1.2
lit                      16.0.6
MarkupSafe               2.1.3
mpmath                   1.3.0
networkx                 3.1
numpy                    1.25.2
nvidia-cublas-cu11       11.10.3.66
nvidia-cuda-cupti-cu11   11.7.101
nvidia-cuda-nvrtc-cu11   11.7.99
nvidia-cuda-runtime-cu11 11.7.99
nvidia-cudnn-cu11        8.5.0.96
nvidia-cufft-cu11        10.9.0.58
nvidia-curand-cu11       10.2.10.91
nvidia-cusolver-cu11     11.4.0.1
nvidia-cusparse-cu11     11.7.4.91
nvidia-nccl-cu11         2.14.3
nvidia-nvtx-cu11         11.7.91
packaging                23.1
pfzy                     0.3.4
Pillow                   10.0.0
pip                      23.2.1
prompt-toolkit           3.0.39
psutil                   5.9.5
PyYAML                   6.0.1
regex                    2023.8.8
requests                 2.31.0
safetensors              0.3.3
setproctitle             1.3.2
setuptools               68.0.0
sympy                    1.12
tokenizers               0.13.3
torch                    2.0.1
torchaudio               2.0.2
torchvision              0.15.2
tqdm                     4.66.1
transformers             4.34.0.dev0
triton                   2.0.0
typing_extensions        4.7.1
urllib3                  2.0.4
waitress                 2.1.2
wcwidth                  0.2.6
wheel                    0.38.4
```
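
From what I've read, this error can come up when the GPU predates Ampere (compute capability 8.0), since cuBLAS bf16 GEMMs aren't supported on older cards. A quick check of whether that applies to my card:

```python
import torch

print(torch.cuda.get_device_name(0))
print("compute capability:", torch.cuda.get_device_capability(0))
# torch reports whether the current device can run bf16 kernels at all;
# pre-Ampere cards (capability < (8, 0)) typically return False here.
print("bf16 supported:", torch.cuda.is_bf16_supported())
```

If that reports False, would passing `torch_dtype=torch.float16` to the pipeline instead of `torch.bfloat16` be the right workaround?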

Any help would be greatly appreciated, thank you.
