Deployed For Evaluation Still Not On Leaderboard

#497
by vikash06 - opened

Hi,

I submitted vikash06/llama-2-7b-small-model-new for evaluation almost 12 hours ago. I am not able to see it on the leaderboard, even after trying the filters (including flagged models), and it does not appear in the queue either. The model itself has not failed to load; I verified it with the code from the submit page:

from transformers import AutoConfig, AutoModel, AutoTokenizer

# Loading test with the submitted model name; "main" is the default revision.
model_name = "vikash06/llama-2-7b-small-model-new"
config = AutoConfig.from_pretrained(model_name, revision="main")
model = AutoModel.from_pretrained(model_name, revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name, revision="main")
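
In case it helps others waiting on a submission, here is a minimal sketch of checking an eval request's status directly, assuming the leaderboard stores submissions as JSON files in the open-llm-leaderboard/requests dataset (the repo name, file layout, and "status" field are assumptions, not confirmed in this thread):

import json
from huggingface_hub import HfApi, hf_hub_download

REQUESTS_REPO = "open-llm-leaderboard/requests"  # assumed dataset name
MODEL = "vikash06/llama-2-7b-small-model-new"
org, name = MODEL.split("/")

api = HfApi()
# Request files appear to live under a folder named after the org/user,
# one JSON file per submission (layout is an assumption).
matches = [
    f for f in api.list_repo_files(REQUESTS_REPO, repo_type="dataset")
    if f.startswith(org + "/") and name in f
]

for path in matches:
    local_path = hf_hub_download(REQUESTS_REPO, path, repo_type="dataset")
    with open(local_path) as fh:
        request = json.load(fh)
    # Expecting a "status" field such as PENDING / RUNNING / FINISHED / FAILED.
    print(path, "->", request.get("status"))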

Apparently the team is working on the evals:

"FYI, the new cluster is having strong connectivity problems, we are putting all evals on hold til it's fixed, and we'll relaunch all FAILED evals of the past 2 days" -

https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/491

Hugging Face H4 org

Hi! The connectivity issues on the cluster have been fixed, and your model should be on the leaderboard :)
Don't hesitate to re-open the issue if your model failed.

SaylorTwift changed discussion status to closed
