CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasGemmStridedBatchedExFix...`

#20
by CalumPlays - opened

Hello all, I keep getting this error every time I run the example Python file on the page.

Full log below:
~/falcon-chat$ python falcon-small.py
Loading checkpoint shards: 100%|████████████████████| 2/2 [00:18<00:00, 9.36s/it]
The model 'RWForCausalLM' is not supported for text-generation. Supported models are ['BartForCausalLM', 'BertLMHeadModel', 'BertGenerationDecoder', 'BigBirdForCausalLM', 'BigBirdPegasusForCausalLM', 'BioGptForCausalLM', 'BlenderbotForCausalLM', 'BlenderbotSmallForCausalLM', 'BloomForCausalLM', 'CamembertForCausalLM', 'CodeGenForCausalLM', 'CpmAntForCausalLM', 'CTRLLMHeadModel', 'Data2VecTextForCausalLM', 'ElectraForCausalLM', 'ErnieForCausalLM', 'GitForCausalLM', 'GPT2LMHeadModel', 'GPT2LMHeadModel', 'GPTBigCodeForCausalLM', 'GPTNeoForCausalLM', 'GPTNeoXForCausalLM', 'GPTNeoXJapaneseForCausalLM', 'GPTJForCausalLM', 'LlamaForCausalLM', 'MarianForCausalLM', 'MBartForCausalLM', 'MegaForCausalLM', 'MegatronBertForCausalLM', 'MvpForCausalLM', 'OpenLlamaForCausalLM', 'OpenAIGPTLMHeadModel', 'OPTForCausalLM', 'PegasusForCausalLM', 'PLBartForCausalLM', 'ProphetNetForCausalLM', 'QDQBertLMHeadModel', 'ReformerModelWithLMHead', 'RemBertForCausalLM', 'RobertaForCausalLM', 'RobertaPreLayerNormForCausalLM', 'RoCBertForCausalLM', 'RoFormerForCausalLM', 'RwkvForCausalLM', 'Speech2Text2ForCausalLM', 'TransfoXLLMHeadModel', 'TrOCRForCausalLM', 'XGLMForCausalLM', 'XLMWithLMHeadModel', 'XLMProphetNetForCausalLM', 'XLMRobertaForCausalLM', 'XLMRobertaXLForCausalLM', 'XLNetLMHeadModel', 'XmodForCausalLM'].
/home/cosmos/miniconda3/envs/ttd/lib/python3.10/site-packages/transformers/generation/utils.py:1255: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see https://huggingface.co/docs/transformers/main_classes/text_generation)
warnings.warn(
Setting pad_token_id to eos_token_id:11 for open-end generation.
Traceback (most recent call last):
File "/home/cosmos/falcon-chat/falcon-small.py", line 16, in
sequences = pipeline(
File "/home/cosmos/miniconda3/envs/ttd/lib/python3.10/site-packages/transformers/pipelines/text_generation.py", line 201, in call
return super().call(text_inputs, **kwargs)
File "/home/cosmos/miniconda3/envs/ttd/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1119, in call
return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
File "/home/cosmos/miniconda3/envs/ttd/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1126, in run_single
model_outputs = self.forward(model_inputs, **forward_params)
File "/home/cosmos/miniconda3/envs/ttd/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1025, in forward
model_outputs = self._forward(model_inputs, **forward_params)
File "/home/cosmos/miniconda3/envs/ttd/lib/python3.10/site-packages/transformers/pipelines/text_generation.py", line 263, in _forward
generated_sequence = self.model.generate(input_ids=input_ids, attention_mask=attention_mask, **generate_kwargs)
File "/home/cosmos/miniconda3/envs/ttd/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/cosmos/miniconda3/envs/ttd/lib/python3.10/site-packages/transformers/generation/utils.py", line 1565, in generate
return self.sample(
File "/home/cosmos/miniconda3/envs/ttd/lib/python3.10/site-packages/transformers/generation/utils.py", line 2612, in sample
outputs = self(
File "/home/cosmos/miniconda3/envs/ttd/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/cosmos/miniconda3/envs/ttd/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "/home/cosmos/.cache/huggingface/modules/transformers_modules/tiiuae/falcon-7b-instruct/22225c3ac76bdddc1c6c44ebea0e3109468de29f/modelling_RW.py", line 753, in forward
transformer_outputs = self.transformer(
File "/home/cosmos/miniconda3/envs/ttd/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/cosmos/miniconda3/envs/ttd/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "/home/cosmos/.cache/huggingface/modules/transformers_modules/tiiuae/falcon-7b-instruct/22225c3ac76bdddc1c6c44ebea0e3109468de29f/modelling_RW.py", line 648, in forward
outputs = block(
File "/home/cosmos/miniconda3/envs/ttd/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/cosmos/miniconda3/envs/ttd/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "/home/cosmos/.cache/huggingface/modules/transformers_modules/tiiuae/falcon-7b-instruct/22225c3ac76bdddc1c6c44ebea0e3109468de29f/modelling_RW.py", line 385, in forward
attn_outputs = self.self_attention(
File "/home/cosmos/miniconda3/envs/ttd/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/cosmos/miniconda3/envs/ttd/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "/home/cosmos/.cache/huggingface/modules/transformers_modules/tiiuae/falcon-7b-instruct/22225c3ac76bdddc1c6c44ebea0e3109468de29f/modelling_RW.py", line 279, in forward
attn_output = F.scaled_dot_product_attention(
RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling cublasGemmStridedBatchedExFix(handle, opa, opb, (int)m, (int)n, (int)k, (void*)&falpha, a, CUDA_R_16BF, (int)lda, stridea, b, CUDA_R_16BF, (int)ldb, strideb, (void*)&fbeta, c, CUDA_R_16BF, (int)ldc, stridec, (int)num_batches, CUDA_R_32F, CUBLAS_GEMM_DEFAULT_TENSOR_OP)

EDIT: I do have cuBLAS installed in my Anaconda environment, along with cuDNN and cudatoolkit.
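
Note: CUBLAS_STATUS_NOT_SUPPORTED on a CUDA_R_16BF GEMM (visible in the error string above) usually means the GPU itself cannot run bfloat16 matmuls, since bf16 GEMMs need an Ampere-or-newer card (compute capability 8.0 or higher), so the conda packages are likely not the problem. A quick check, assuming a working PyTorch CUDA install:

```python
import torch

# bfloat16 GEMMs require compute capability >= 8.0 (Ampere or newer);
# on older cards cuBLAS rejects the bf16 kernels with
# CUBLAS_STATUS_NOT_SUPPORTED.
print(torch.cuda.get_device_name(0))
print(torch.cuda.get_device_capability(0))  # e.g. (7, 5) on a Turing GPU
print(torch.cuda.is_bf16_supported())       # False -> fall back to float16
```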

UPDATE: I fixed this by using float16 instead of bfloat16.
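
For reference, a minimal sketch of that fix, assuming the script mirrors the `transformers` pipeline example from the Falcon model card (the only change is the `torch_dtype` argument):

```python
import torch
import transformers
from transformers import AutoTokenizer

model = "tiiuae/falcon-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.float16,  # was torch.bfloat16; fp16 runs on pre-Ampere GPUs
    trust_remote_code=True,
    device_map="auto",
)
```

The rest of the script (the `pipeline(...)` generation call) stays the same. One caveat: float16 has a narrower dynamic range than bfloat16, so keep an eye out for NaN/inf outputs on long generations.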

FalconLLM changed discussion status to closed

Changing bfloat16 to float16 worked for me too... thanks!
