Model parameter numbers are incorrect

#253
by ekurtulus - opened

Currently, when the models are filtered using the model size options, some of the 70B models are displayed alongside 7B models:

[screenshot: size filter showing 70B models listed among 7B models]

Why does this happen, and is there a plan to fix it soon?

Thank you very much for all the hard work and contributions to the open source community!
Emirhan

Open LLM Leaderboard org

Hi!
Thank you for your issue, good catch!

When computing these models' sizes with safetensors, the total number of parameters reported is actually around 9 or 10B (you can display this column using the #Params toggle). I suspect it's due to the way quantization is done: several post-quantization int4 values are packed into the same space as one float16/bfloat16 pre-quantization value, so counting raw tensor elements undercounts the true number of weights.
I reported this to the people working on the safetensors lib (cc: @Narsil ).
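To make the packing arithmetic concrete, here is a quick back-of-the-envelope sketch (the 70B figure and the 8-values-per-int32 packing are assumptions based on the usual GPTQ layout, not measurements from any specific model):

```python
# A ~70B-parameter model quantized to 4 bits, with weights packed
# 8 per int32 element (32 / 4 = 8), shows up as roughly
# 70e9 / 8 = 8.75e9 elements when raw tensor entries are counted,
# which is in the same ballpark as the ~9-10B reported here.
true_weights = 70e9
values_per_element = 32 // 4  # int4 values packed into one int32
naive_count = true_weights / values_per_element
print(f"naive element count: {naive_count:.2e}")  # ~8.75e9
```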


Yeah, this messes up the stats; hope this gets fixed.

Actually, they are stored as I32 because torch doesn't support I4/U4: https://huggingface.co/TheBloke/Wizard-Vicuna-7B-Uncensored-GPTQ?show_tensors=true
(Indeed, 4-bit is not a real dtype.)
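For anyone who wants to sanity-check this locally, here is a minimal sketch that reads a `.safetensors` header directly (the file format begins with an 8-byte little-endian header length followed by a JSON tensor index) and re-counts parameters. The 4-bit assumption and the name-based `qweight`/`qzeros` heuristic are mine, based on the common GPTQ checkpoint convention, and are not part of the safetensors lib:

```python
import json
import struct

def count_params(path: str, bits: int = 4):
    """Count parameters in a .safetensors file two ways: naively
    (raw tensor elements) and corrected for GPTQ-style packing,
    where each I32 element of a qweight/qzeros tensor holds
    32 // bits packed low-precision values. A sketch, not a
    general-purpose tool."""
    with open(path, "rb") as f:
        header_len = struct.unpack("<Q", f.read(8))[0]  # 8-byte LE length
        header = json.loads(f.read(header_len))         # JSON tensor index
    naive = corrected = 0
    for name, info in header.items():
        if name == "__metadata__":  # optional free-form metadata entry
            continue
        n = 1
        for dim in info["shape"]:
            n *= dim
        naive += n
        # Heuristic (assumption): GPTQ checkpoints pack int4 weights
        # into I32 tensors conventionally named qweight / qzeros.
        if info["dtype"] == "I32" and ("qweight" in name or "qzeros" in name):
            n *= 32 // bits
        corrected += n
    return naive, corrected

# Usage (hypothetical local path):
# naive, corrected = count_params("model.safetensors")
# print(f"naive: {naive/1e9:.2f}B, corrected: {corrected/1e9:.2f}B")
```

On a GPTQ checkpoint like the one linked above, the naive count should land well below the model's nominal size, while the corrected count should be close to it.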

clefourrier changed discussion title from Listing Problem to Model parameter numbers are incorrect

@Narsil @clefourrier is there an update on this?

Related to this, I'd like to look at unquantized models. Would a button to toggle them be helpful to others?

Open LLM Leaderboard org
edited Sep 14, 2023

@pcuenq I think it could be even more helpful to have the option to display only models in a given precision - that would be super useful to have, feel free to add it if you have the spoons! :)

Open LLM Leaderboard org

A toggle to choose a model's precision has been added, so you can now select only 4-bit models and compare them against each other. Closing this issue, but do not hesitate to reopen if you have any suggestions on how to better handle model types.

SaylorTwift changed discussion status to closed
