Alternate quantizations.
#4 · opened by ZeroWw
These are my own quantizations (updated almost daily).
The difference from normal quantizations is that I quantize the output and embedding tensors to f16, and the other tensors to q5_k, q6_k, or q8_0.
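As a rough sketch of how such a mixed quantization could be produced (this is not my exact script, just an illustration assuming a llama.cpp build whose llama-quantize supports the --output-tensor-type and --token-embedding-type options, and with made-up file names):

```python
# Sketch: re-quantize an f16 GGUF with llama.cpp's llama-quantize,
# keeping the output and token-embedding tensors at f16 while
# converting the remaining tensors to q6_k.
# Paths and the binary location are assumptions; adjust for your setup.
import subprocess

subprocess.run(
    [
        "./llama-quantize",
        "--output-tensor-type", "f16",    # keep the output tensor at f16
        "--token-embedding-type", "f16",  # keep the embedding tensor at f16
        "model-f16.gguf",                 # source model (full f16 GGUF)
        "model-f16-q6_k.gguf",            # destination file
        "q6_k",                           # type for all remaining tensors
    ],
    check=True,
)
```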
This creates models that are only slightly degraded, if at all, and have a smaller size. They run at about 3-6 t/s on CPU only using llama.cpp.
And obviously faster on computers with powerful GPUs.
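For reference, a CPU-only run along these lines is roughly where a 3-6 t/s figure would come from (again a sketch with assumed paths and prompt, using llama.cpp's llama-cli):

```python
# Sketch: CPU-only generation with llama.cpp's llama-cli, no GPU offload.
import subprocess

subprocess.run(
    [
        "./llama-cli",
        "-m", "model-f16-q6_k.gguf",          # quantized model from the step above
        "-p", "Write a quicksort in Python.",  # example prompt
        "-n", "256",                           # number of tokens to generate
        "--threads", "8",                      # CPU threads to use
    ],
    check=True,
)
```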
https://huggingface.co/ZeroWw/DeepSeek-Coder-V2-Lite-Base-GGUF