Parameter count at 0 for incorrectly detected params is misleading

#867
by dnhkng - opened

Hi,

My model, dnhkng/RYS-Huge-bnb-4bit, is really big, so I uploaded it in 4-bit for the leaderboard. However, the model size filter sliders seem to be broken: when I set the minimum model size to zero and leave the maximum at 9B, my model is still listed, as shown in the screenshot:

[Screenshot: Screenshot 2024-08-01 at 06.47.17.png]

This puts me (undeservedly) at the top of the list. My model is about 80B parameters and 55 GB, so it should not be in this subset.

Open LLM Leaderboard org

Hi!

If you display the parameter count column, you'll see 0 for your model: most likely we did not manage to detect the count automatically from the safetensors weights, so setting the range from 0.1 to 9 would hide your model.
We were considering changing the parameter count to -1 for models where detection fails. (cc @alozowski when you come back?)
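For reference, here is a minimal sketch of how a parameter count can be read from the safetensors headers on the Hub with huggingface_hub. This is only an illustration, not necessarily the leaderboard's actual detection code, and for bitsandbytes 4-bit checkpoints the numbers it reports can be badly off:

```python
from huggingface_hub import HfApi

# Illustration only: sum the per-dtype element counts reported in the
# safetensors file headers of the repo (not the leaderboard's own code).
meta = HfApi().get_safetensors_metadata("dnhkng/RYS-Huge-bnb-4bit")
total = sum(meta.parameter_count.values())  # dict of dtype -> element count
print(f"~{total / 1e9:.1f}B elements reported by the safetensors headers")
# bnb 4-bit checkpoints store weights as packed quantized tensors, so a
# count derived this way can undercount the real parameter count, or
# detection can fail entirely and fall back to 0.
```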

clefourrier changed discussion title from "Model parameter count is wrong on the leaderboard" to "Parameter count at 0 for incorrectly detected params is misleading"

Can I correct the parameter count myself?

Also, -1 sounds like a good option for failure cases. The other option is to start the range at a meaningful minimum, like 0.01B; anything smaller than that doesn't make sense for an LLM (even TinyStories).
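To make the failure case concrete, here is a toy sketch (plain Python, not the leaderboard's code) of why a -1 sentinel behaves better than 0: it keeps an undetected model out of a 0–9B size filter instead of letting it slip through as "tiny":

```python
UNKNOWN = -1.0  # sentinel for "parameter count could not be detected"

models = [
    {"name": "some-1b-model", "params_b": 1.0},
    {"name": "dnhkng/RYS-Huge-bnb-4bit", "params_b": UNKNOWN},  # detection failed
]

def in_size_range(model, lo=0.0, hi=9.0):
    p = model["params_b"]
    if p == UNKNOWN:
        # Don't pretend an undetected model is small; surface it separately.
        return False
    return lo <= p <= hi

print([m["name"] for m in models if in_size_range(m)])  # only the 1B model
```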

Open LLM Leaderboard org

Yep, feel free to open a PR on the request files and tag me there!

Also, I think that because you restarted the eval manually, the request file is still the failed version:

https://huggingface.co/datasets/open-llm-leaderboard/requests/blob/main/dnhkng/RYS-Huge-bnb-4bit_eval_request_False_4bit_Original.json
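For anyone else who wants to correct their own entry, here is a hedged sketch of what such a PR could look like with huggingface_hub; the "params" field name and its unit (billions) are assumptions based on the linked request file rather than anything documented in this thread:

```python
import json
from huggingface_hub import HfApi, hf_hub_download, CommitOperationAdd

REPO = "open-llm-leaderboard/requests"
PATH = "dnhkng/RYS-Huge-bnb-4bit_eval_request_False_4bit_Original.json"

# Download the existing request file and patch the parameter count.
local = hf_hub_download(repo_id=REPO, filename=PATH, repo_type="dataset")
with open(local) as f:
    request = json.load(f)
request["params"] = 80.0  # assumed field name; ~80B as reported in this thread

# Open a pull request against the requests dataset with the corrected file.
HfApi().create_commit(
    repo_id=REPO,
    repo_type="dataset",
    operations=[
        CommitOperationAdd(
            path_in_repo=PATH,
            path_or_fileobj=json.dumps(request, indent=4).encode(),
        )
    ],
    commit_message="Fix parameter count for dnhkng/RYS-Huge-bnb-4bit",
    create_pr=True,
)
```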

Open LLM Leaderboard org

Good catch, fixed
