Separate larger from smaller model benchmarking.

#640
by rombodawg - opened

@clefourrier (assuming that the 70B-120B models are slowing down the leaderboard)

Can we separate the benchmarking of models above and below 40B parameters? I just think it's a little ridiculous that we have to wait weeks for 13 enormous models to be benchmarked (which probably aren't even that good) while a lot of other good, much smaller models are blocked from benchmarking. I don't mean separating the leaderboard, just the compute used to benchmark the different model sizes.

Open LLM Leaderboard org
edited Mar 21

Hi @rombodawg ,
Thanks for your message!
This assumption is incorrect: models are launched in their order of submission, depending on available compute on our research cluster, but completely independently of model size (each model is launched on a single node). So when the queue slows down, it means models from the research team have started training on the cluster.
I'm going to check if something specific is blocking the queue at the moment.
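
For intuition, here is a minimal sketch of what such size-independent FIFO scheduling looks like. This is not the leaderboard's actual code; all names and numbers are illustrative assumptions.

```python
from collections import deque

def schedule(submissions, free_nodes):
    """Launch queued models in submission order while nodes are available.

    Hypothetical sketch: each submitted model occupies exactly one node,
    so a 120B model takes no more queue slots than a 7B one.
    """
    queue = deque(submissions)     # oldest submission first
    running = []
    while queue and free_nodes > 0:
        model = queue.popleft()    # order of submission, not model size
        running.append(model)
        free_nodes -= 1            # one node per model, regardless of size
    return running, list(queue)

# Example: the 120B model at the front launches first, but it only
# consumes one node, so the smaller model behind it is launched too.
running, waiting = schedule(["mega-120b", "small-7b", "mid-13b"], free_nodes=2)
print(running)  # ['mega-120b', 'small-7b']
print(waiting)  # ['mid-13b']
```

Under this scheme, large models don't starve smaller ones of nodes; the queue only backs up when the total number of free nodes drops, e.g. when internal training jobs take over the cluster.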

clefourrier changed discussion status to closed
