Latest results from eval runs are not updated on Leaderboard/Content repo

#847
by pankajmathur - opened

Hello Open LLM LB Team,

Thanks for taking the time to read this discussion. Could you please check why the latest results from some of my model eval runs have not been updated in the Leaderboard/Contents repo?

The https://huggingface.co/datasets/open-llm-leaderboard/contents dataset doesn't show eval runs for any of the models below, even though their details datasets exist (a quick way to verify this is sketched after the list):

https://huggingface.co/datasets/open-llm-leaderboard/pankajmathur__orca_mini_v7_72b-details
https://huggingface.co/datasets/open-llm-leaderboard/pankajmathur__orca_mini_v2_7b-details
https://huggingface.co/datasets/open-llm-leaderboard/pankajmathur__orca_mini_v6_8b_dpo-details
https://huggingface.co/datasets/open-llm-leaderboard/pankajmathur__orca_mini_v5_8b_dpo-details
https://huggingface.co/datasets/open-llm-leaderboard/pankajmathur__Al_Dente_v1_8b-details
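
For anyone hitting the same issue, here is a minimal sketch (not official tooling) of how one might check whether these models appear in the contents dataset. It assumes the dataset loads with the default `train` split and that the model name appears somewhere in one of its string columns; column names are not relied on since they may change.

```python
# Sketch: check whether a model's eval results appear in the
# open-llm-leaderboard/contents dataset (assumed default "train" split).
from datasets import load_dataset

contents = load_dataset("open-llm-leaderboard/contents", split="train")

models = [
    "pankajmathur/orca_mini_v7_72b",
    "pankajmathur/orca_mini_v2_7b",
    "pankajmathur/orca_mini_v6_8b_dpo",
    "pankajmathur/orca_mini_v5_8b_dpo",
    "pankajmathur/Al_Dente_v1_8b",
]

for model in models:
    # Scan every row for the model name in any string field,
    # rather than depending on a specific column name.
    found = any(
        any(isinstance(v, str) and model in v for v in row.values())
        for row in contents
    )
    print(f"{model}: {'found' if found else 'missing'}")
```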

P.S.: They all finished within a 24-hour window; not sure if that's the reason.

Regards,
Pankaj

Open LLM Leaderboard org

Hi @pankajmathur ,

It was a bug on our side, but your models are here now, you can check them on the Leaderboard! 🤗
[Screenshot: Leaderboard showing the models, 2024-07-18]

alozowski changed discussion status to closed
