Demo inference speed

#3
by maor121 - opened

Hi,

First, I want to thank you for this work; the model's performance is quite impressive.
Secondly, I am trying to set it up on my machine: Intel i7 CPU, 32GB RAM, NVIDIA GTX 1080 Ti (11GB VRAM).

Using the GPU, one inference takes around 20-30 seconds.
However, I noticed that in your demo here: https://huggingface.co/spaces/dicta-il/dictalm2.0-instruct-demo
inference takes < 1 second, and at the top it says the demo runs on CPU & RAM only (no GPU).
I have tried running on CPU as well, but it is significantly slower than on GPU.

How is the demo able to achieve such speed? What am I missing?
I thought maybe the demo uses the quantized version, but it links here, to the full-precision model, not the quantized one.

Thanks

DICTA: The Israel Center for Text Analysis org

The model is loaded on a separate server; the demo just sends a request to that server, which is why the Space itself only requires a CPU.
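
Roughly, the Space does something like this (the endpoint URL and payload schema below are hypothetical, not the actual server details):

```python
# Hypothetical illustration of the thin-client pattern: the Space forwards the
# prompt to a remote inference server, so no local GPU is needed.
import requests

resp = requests.post(
    "https://inference.example.org/generate",  # hypothetical endpoint
    json={"prompt": "שלום", "max_new_tokens": 128},
    timeout=60,
)
print(resp.json()["generated_text"])  # hypothetical response schema
```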
For running on a GTX 1080 Ti, I recommend using the AWQ quantized model here, which will fit in the 11GB of VRAM.
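
For example, a minimal loading sketch with transformers (assuming `autoawq` and a recent `transformers` are installed; the prompt and generation settings are just illustrative):

```python
# Minimal sketch: load the AWQ-quantized model on GPU via transformers + autoawq.
# Assumes `pip install autoawq transformers` and a CUDA-capable GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dicta-il/dictalm2.0-instruct-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # AWQ kernels compute in fp16
    device_map="cuda",
)

inputs = tokenizer("שלום, מה שלומך?", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```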

Hi @Shaltiel ,
Should this model work on Windows (https://huggingface.co/dicta-il/dictalm2.0-instruct-AWQ/discussions/1)?

I am trying to import:
import intel_extension_for_pytorch as ipex # intel-extension-for-pytorch

The error is:
"RuntimeError: GPU is required to run AWQ quantized model. You can use IPEX version AWQ if you have an Intel CPU"

Then I read that this package (intel-extension-for-pytorch) only works on Linux. I even went further and installed VirtualBox, where I run Ubuntu 24.04, but I get the same error. It might be a version issue (the build should be compatible with my exact CPU).

So my main questions are: should this model work on Windows? And where did I go wrong?

Best,
Eli

DICTA: The Israel Center for Text Analysis org

Hi Eli,

The AWQ quantization format requires the auto-awq library (see the instructions on the main page) and only runs on GPU.
For running on Windows on CPU, I recommend using llama.cpp or LM Studio; see here: https://huggingface.co/dicta-il/dictalm2.0-instruct-GGUF
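
For example, a minimal llama-cpp-python sketch for CPU inference (assuming `pip install llama-cpp-python` and a GGUF file downloaded from that repo; the file name below is just a placeholder):

```python
# Minimal sketch: CPU inference with llama-cpp-python on a downloaded GGUF file.
# The file name below is a placeholder; pick any quantization from the repo.
from llama_cpp import Llama

llm = Llama(
    model_path="dictalm2.0-instruct.Q4_K_M.gguf",  # path to the downloaded GGUF file
    n_ctx=2048,   # context window
    n_threads=8,  # tune to your CPU core count
)

out = llm("שלום, מה שלומך?", max_tokens=64)
print(out["choices"][0]["text"])
```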

Hi @Shaltiel ,
Thanks a lot for your prompt reply. However, I am having issues with that model too... I posted my question in the relevant forum: https://huggingface.co/dicta-il/dictalm2.0-instruct-GGUF/discussions/1
Help would be highly appreciated.

Best,
Eli Borodach
