Differences

#1 opened by Unknown37

What are the differences between Wizard Mega 13B by Openaccess-AI-Collective, the GGML version, and this GPTQ model?

- GGML is quantised for CPU-based inference (and now also supports acceleration from a CUDA GPU), used from C++-based clients.
- GPTQ is quantised for GPU-based inference, used from Python code.
- The base repo is float16, also for GPU-based inference, but it requires a lot more VRAM.
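
For concreteness, here is a minimal loading sketch for each format from Python. The file and repo names below are placeholders rather than the exact artefacts published in these repos, and llama-cpp-python must be built with CUDA support for the GPU offload to take effect:

```python
# GGML: llama.cpp inference via Python bindings, with optional CUDA offload.
from llama_cpp import Llama

ggml = Llama(
    model_path="wizard-mega-13B.ggmlv3.q4_0.bin",  # placeholder filename
    n_gpu_layers=40,  # layers offloaded to the CUDA GPU; 0 = pure CPU
)
print(ggml("Say hello.", max_tokens=32)["choices"][0]["text"])

# GPTQ: 4-bit GPU inference, e.g. via AutoGPTQ.
from auto_gptq import AutoGPTQForCausalLM

gptq = AutoGPTQForCausalLM.from_quantized(
    "TheBloke/wizard-mega-13B-GPTQ",  # placeholder repo id
    device="cuda:0",
    use_safetensors=True,
)

# float16 base repo: unquantised weights, highest VRAM use.
import torch
from transformers import AutoModelForCausalLM

fp16 = AutoModelForCausalLM.from_pretrained(
    "openaccess-ai-collective/wizard-mega-13b",  # placeholder repo id
    torch_dtype=torch.float16,
    device_map="auto",  # requires the accelerate package
)
```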

So now with CUDA acceleration, should GGML be faster than ExLlama thanks to its C++ backends, even when used through llama-cpp-python or ctransformers?

No, ExLlama is still the performance king.

GGML with full CUDA acceleration is fast, much faster than it used to be. But ExLlama still outperforms it. For example, on a 7B model with a 4090 GPU and a good CPU you will be able to get 100-120 tokens/s with GGML, but ExLlama will do 140-170 tokens/s.
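
If you want to verify the throughput on your own hardware, a rough timing harness along these lines works for the GGML side (the model path and prompt are placeholders; ExLlama ships its own benchmark scripts, so measure that side with its bundled tooling):

```python
import time
from llama_cpp import Llama

# Placeholder path; point this at your local GGML file.
llm = Llama(model_path="wizard-mega-13B.ggmlv3.q4_0.bin", n_gpu_layers=40)

start = time.perf_counter()
out = llm("Write a short story about a robot.", max_tokens=256)
elapsed = time.perf_counter() - start

# llama-cpp-python returns an OpenAI-style dict with token usage counts.
generated = out["usage"]["completion_tokens"]
print(f"{generated} tokens in {elapsed:.2f}s -> {generated / elapsed:.1f} tokens/s")
```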
