Model producing no output and running forever

#3
by rpeinl - opened

Hi there @TheBloke
I ran the model on JupyterHub with auto-gptq installed via
pip install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
The model loads into GPU memory, as confirmed with nvidia-smi.
However, after running the inference code, the model runs for minutes without producing any output at all, and without any error message.
When I run the original 16-bit model, it takes 8 seconds to produce some Python code on my 20 GB slice of an A100.
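For reference, a minimal sketch of the usual transformers-based setup for the GPTQ models (the repo name, prompt, and generation parameters are placeholders, not the exact values from my notebook):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Some-Model-GPTQ"  # placeholder -- not the exact repo

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" puts the quantized weights on the GPU; auto-gptq
# must be importable for transformers to load GPTQ checkpoints.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Bounding max_new_tokens makes a silent hang distinguishable from a
# legitimately long generation.
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```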
Any ideas?
I also tried the AWQ model, but I failed to get that working either. This time I get the error
AssertionError: AWQ kernels could not be loaded.
although I followed the steps on the model card and installed AWQ from GitHub.
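The loading code follows the AutoAWQ pattern from the model card (again a sketch; the repo name is a placeholder):

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_id = "TheBloke/Some-Model-AWQ"  # placeholder -- not the exact repo

tokenizer = AutoTokenizer.from_pretrained(model_id)
# fuse_layers=True uses the fused CUDA kernels; this is typically where
# "AWQ kernels could not be loaded" is raised when the compiled
# extension does not match the installed torch/CUDA version.
model = AutoAWQForCausalLM.from_quantized(model_id, fuse_layers=True)
```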
Regards
René

@rpeinl are you using the correct prompt format?

As I said, the FP16 version of the model runs fine. I tried both the prompt template from TheBloke's model card and the one from the original model's card. Neither works with the quantized model.
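To rule out a template mistake, I print the exact prompt string before tokenizing. The template below is illustrative only, not the model's actual one:

```python
# Illustrative template only -- the real one comes from the model card.
template = "### Instruction:\n{instruction}\n\n### Response:\n"
prompt = template.format(instruction="Write a Python function that reverses a string.")
print(repr(prompt))  # verify the exact string handed to the tokenizer
```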

The AWQ error turned out to be a problem with my environment, which I was able to fix. Afterwards, however, the AWQ model still only produces empty outputs.
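One way to tell whether the model generates nothing at all, or generates tokens that merely decode to an empty string (sketch, reusing the model and tokenizer from the AWQ snippet above):

```python
# model and tokenizer loaded as in the AWQ sketch above
prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, max_new_tokens=64)

# Strip the prompt tokens to look at the newly generated part only.
new_tokens = output[0][inputs["input_ids"].shape[-1]:]
print(len(new_tokens))                     # 0 => generation stops immediately (e.g. instant EOS)
print(repr(tokenizer.decode(new_tokens)))  # non-empty ids but empty text => decoding/template issue
```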
