Some Issues Regarding Bugs in 0-1B Model Rankings

#750
by lingyun1 - opened

There is a 20B-parameter model Writer/palmyra-large and a 7B-parameter model LLM360/Amber included in the 0-1B ranking list. Could you please move them to the ranking lists corresponding to their parameters? I would be very grateful.

I have also noticed this issue; it is unfair to the other SLMs!

(screenshot attached)

Open LLM Leaderboard org

Hi! I think those are models for which we did not manage to extract the number of parameters, so their size has been set to 0. Can you try increasing the lower bound to 0.01?

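As an aside, here is a minimal sketch of why raising the lower bound works. The model entries and the filter function are made up for illustration (the leaderboard's actual filtering code may differ), but the idea is the same: a failed extraction is stored as 0, which passes a lower bound of 0 but not one of 0.01.

```python
# Hypothetical model entries; `some/actual-0.5b-model` is a placeholder name.
# A failed parameter-count extraction is recorded as 0.0.
models = [
    {"name": "Writer/palmyra-large", "params_b": 0.0},   # extraction failed
    {"name": "LLM360/Amber", "params_b": 0.0},           # extraction failed
    {"name": "some/actual-0.5b-model", "params_b": 0.5},
]

def filter_by_params(models, lower, upper):
    """Keep models whose reported size (in billions) is within [lower, upper]."""
    return [m["name"] for m in models if lower <= m["params_b"] <= upper]

# A lower bound of 0 keeps the mislabeled models in the 0-1B view;
# raising it to 0.01 drops anything reported as exactly 0.
print(filter_by_params(models, 0.0, 1.0))
print(filter_by_params(models, 0.01, 1.0))
```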

Thanks for your reply; that works!

Open LLM Leaderboard org

Cool, closing!

clefourrier changed discussion status to closed
