Evaluation process and missing benchmark entries
Thanks for putting together this effort, it is of great interest to our group.
I was curious about the evaluation process: is it expected to be automated, and are there any limits on model size?
Also, the model we submitted first, https://huggingface.co/IBI-CAAI/MELT-Mixtral-8x7B-Instruct-v0.1, has not shown up on the benchmark, but models we submitted a day later have. I submitted it again, but it says the submission has already been made. Is there a reason for this?
Same issue for internistai/base-7b-v0.2: it doesn't show in the results, but it says it was already submitted.
Same issue for two models: starmpcc/Asclepius-Llama2-13B and starmpcc/Asclepius-Llama2-7B. I submitted them a few days ago, but they don't appear to be pending in the queue.
There are far more submitted results than there are models on the leaderboard. How are you determining what gets posted?