Can quantization format be added?

#11
by sgjohnson - opened

Can a column (and a filter) for quantization format be added — GGML, GGUF, etc.? And isn't GPTQ a quantization format rather than a precision? It can be any precision lower than the original's, no?

Open LLM Leaderboard Archive org

Hi!
At the moment, 4bit or 8bit precision indicates that the model was quantized on the fly by our backend using bitsandbytes, whereas GPTQ indicates that we evaluated a model already quantized by users (you can then check the model's configuration to get its precision).
We'll add more info in the future, feel free to reopen this issue on the OpenLLMLeaderboard's discussion tab so we can keep track of it!
(We don't really monitor issues on this repo)

clefourrier changed discussion status to closed
