Error Log

#2
by Cmansterino - opened

Model works!
However I received the following error log while loading this in webUI:

2023-08-14 22:10:47 WARNING:Exllama kernel is not installed, reset disable_exllama to True. This may because you installed auto_gptq using a pre-build wheel on Windows, in which exllama_kernels are not compiled. To use exllama_kernels to further speedup inference, you can re-install auto_gptq from source.
2023-08-14 22:10:54 WARNING:skip module injection for FusedLlamaMLPForQuantizedModel not support integrate without triton yet.
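The first warning suggests its own remedy: the pre-built Windows wheel of auto_gptq ships without compiled exllama kernels, so reinstalling from source lets them be built locally. A minimal sketch of what that might look like (the `--no-binary` approach and the build prerequisites are assumptions, not steps confirmed in this thread; a source build on Windows generally needs a matching CUDA toolkit and a C++ compiler):

```shell
# Remove the pre-built wheel that shipped without compiled exllama kernels
pip uninstall -y auto-gptq

# Reinstall, forcing pip to build from the source distribution
# so the exllama kernels get compiled for the local setup
pip install auto-gptq --no-binary auto-gptq
```

If the warning persists afterwards, the build most likely fell back to a kernel-less install; checking the pip build output for compiler or CUDA errors is the usual next step.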

Hi, this is not an issue with the model. It's an issue with your webUI installation.

Are you using AMD on Linux? If so, I can help; otherwise, please report this on the text-generation-webui GitHub.

Thanks for responding!
I am using NVIDIA on Windows, so I will check the webUI GitHub, as they probably have an example of the same error and troubleshooting steps.
I was very satisfied with AMD in the past and might consider them for my next GPU.
