When I use this model with the tokenizer and AutoProcessor, I run into a CUDA error in the transformers library.
Hi,
Hope you are well. Great job with LLaVA, by the way.
Here is the error I run into:
```
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: cuDNN error: CUDNN_STATUS_INTERNAL_ERROR
```
This happens only with the 34B model; the 7B models work fine. Any clue why this is happening?
I tried the 34B weights with the original LLaVA code repo and they work fine there, but when I load the 34B model through the transformers library, it fails.
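For reference, this is roughly how I'm loading it in transformers. A minimal sketch only: the model id shown is the llava-hf LLaVA-NeXT 34B checkpoint on the Hub, and the dtype/device settings are illustrative, not necessarily my exact setup:

```python
import torch
from transformers import AutoProcessor, LlavaNextForConditionalGeneration

# Illustrative checkpoint id -- substitute whichever 34B repo you actually use.
model_id = "llava-hf/llava-v1.6-34b-hf"

processor = AutoProcessor.from_pretrained(model_id)
model = LlavaNextForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision; the 34B weights won't fit in fp32 on one GPU
    device_map="auto",          # shard the weights across available GPUs
)
```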
I also tried editing the preprocessor config (preprocessor_config.json) to match the Mistral model's config, and the problem persists.
Thanks a lot for your help.
Best