There seems to be a problem with the Mixtral finetune evaluations

#491
by DavidGF - opened

I have seen that some Mixtral finetunes have failed the evaluation.
This also includes our finetunes. Does anyone know why this could be?
Currently, almost all successfully evaluated Mixtral models have the sliding window set (or had it set at the time of evaluation), which is actually not correct for Mixtral. Could that be the cause? A quick way to check the setting is sketched below.

Regards,
David
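For reference, the sliding-window setting David mentions lives in each model's config.json (the `sliding_window` field in Mistral/Mixtral-style configs; `None` means disabled). Here is a minimal sketch for comparing it across finetunes using `transformers.AutoConfig`; the model IDs are just examples taken from this thread:

```python
from transformers import AutoConfig

# Compare the sliding_window setting across a base model and a finetune.
# The attribute name assumes a Mistral/Mixtral-style config; it is None
# when sliding-window attention is disabled.
for repo_id in [
    "mistralai/Mixtral-8x7B-v0.1",                      # base model, for reference
    "cognitivecomputations/dolphin-2.6-mixtral-8x7b",   # a finetune from this thread
]:
    config = AutoConfig.from_pretrained(repo_id)
    print(repo_id, "->", getattr(config, "sliding_window", None))
```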

@DavidGF I was wondering why Mixtrals weren't showing up on the leaderboard. Out of curiosity, can you provide links to some that failed?

@DavidGF Thanks! That list includes a couple I was waiting on, especially Dolphin.

Hugging Face H4 org

Hi!
Thanks for the detailed report!
We changed the cluster that the leaderboard backend is running on, and it appears the new cluster has problems connecting to the Hub - we are investigating and will fix it asap

Hugging Face H4 org

Hi! Some models failed while we migrated the leaderboard backend. I will requeue all those models. Thanks for the notice :)

SaylorTwift changed discussion status to closed
DavidGF changed discussion status to open
Hugging Face H4 org

Hi,
FYI, the new cluster is having severe connectivity problems, so we are putting all evals on hold until it's fixed, and we'll relaunch all FAILED evals from the past two days

OpenPipe/mistral-ft-optimized-1218
cognitivecomputations/dolphin-2.6-mixtral-8x7b

Those models have issues getting into the evaluation queue as well (a rough way to check queue status is sketched below).

Thanks for fixing!
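For anyone who wants to check whether a model made it into the queue, here is a rough sketch. It assumes the leaderboard's eval request files live in the `open-llm-leaderboard/requests` dataset and that each file carries a `status` field; both are assumptions about the leaderboard's internals, not a documented API:

```python
import json
from huggingface_hub import hf_hub_download, list_repo_files

MODEL = "cognitivecomputations/dolphin-2.6-mixtral-8x7b"
REQUESTS_REPO = "open-llm-leaderboard/requests"  # assumed dataset repo id

# Request files appear to be stored under "<org>/<model>_eval_request_*.json";
# filter to the model we care about and print the recorded status.
for path in list_repo_files(REQUESTS_REPO, repo_type="dataset"):
    if path.startswith(MODEL):
        local = hf_hub_download(REQUESTS_REPO, path, repo_type="dataset")
        with open(local) as f:
            request = json.load(f)
        print(path, "->", request.get("status"))
```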

DavidGF changed discussion status to closed
