Submit OLMo to the leaderboard

#14
by kno10 - opened

Please submit this model to the leaderboard https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
Apparently it is not run automatically because it contains some custom executable code.
@clefourrier

Allen Institute for AI org

I think the plan is to add it fully into Hugging Face transformers and remove all the custom executable code. I assume at that point it would get run automatically?

Hi!
We almost never run models which require trust_remote_code=True on our cluster, for safety reasons. It might happen exceptionally, but then my co-maintainer or I have to read the full code of the model and its dependencies before running evaluations, and we don't have the bandwidth at the moment.
However, if the model becomes natively integrated into transformers, then it will become instantly submittable on the leaderboard :)
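For reference, the practical difference looks roughly like this (a minimal sketch; the repo id and prompt are illustrative assumptions, and the exact custom-code setup of the OLMo repo may differ):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "allenai/OLMo-7B"  # illustrative repo id

# Today: the repo ships custom modeling code, so loading requires
# explicitly opting in to executing code downloaded from the Hub.
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)

# After native integration into transformers, the same load works without
# the flag, which is what makes the model submittable to the leaderboard:
# model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("Language models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```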

@clefourrier CohereForAI/c4ai-command-r-v01 is another example of a hyped model that is not (yet) evaluated on the leaderboard.
Both are probably worth the effort of checking the "remote code" (or integrating it into transformers... vLLM seems to have merged OLMo support: https://github.com/vllm-project/vllm/pull/2832 ).
On lmsys.org, OLMo currently ranks at the level of Mistral-7B-Instruct-v0.1, but my subjective impression is that OLMo is substantially worse.
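For what it's worth, the vLLM route mentioned above would look roughly like this (a sketch, assuming a vLLM build that already includes the merged OLMo support and the allenai/OLMo-7B repo id):

```python
from vllm import LLM, SamplingParams

# Requires a vLLM version that includes the OLMo support merged in the PR above.
llm = LLM(model="allenai/OLMo-7B", trust_remote_code=True)

params = SamplingParams(temperature=0.8, max_tokens=64)
outputs = llm.generate(["Language models are"], params)
print(outputs[0].outputs[0].text)
```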

Either way, neither is the common case of yet-another-merge; in both cases substantial effort went into the training, so their performance on the leaderboard is of much higher interest than the thousands of overfitted LoRA models we see these days. I'd really like to see them included in the evaluation.

@dirkgr I have not seen a pull request for OLMo in the transformers library yet. Why not? It would help promote your model if it were easier to run.

Allen Institute for AI org

Mate, we have lots of priorities, and getting OLMo into the transformers library is just one of them. We recently cleaned up the naming conventions in the training code to make that easier. Meanwhile, the entire stack is open source, and the HF version of the code is already in the OLMo repo. If you need this right now, you can make the PR today!

Hi @kno10 ,
We actually discussed this with Cohere before the release, and since it's not a base pretrained model (it's a pretrained + SFT model), we followed our usual guidelines and did not evaluate it. However, we explained how they could report numbers equivalent to the Open LLM Leaderboard's results, and as far as I know, the code is currently being integrated into transformers.

But I also agree with @dirkgr that if you think things are missing, you should feel free to contribute! After all, that's one of the cool things about open source :)
