How to submit models to the leaderboard?

#1
by lewtun

Hello, congrats on the excellent NexusRaven model and this neat evaluation benchmark! I'm curious if you plan to allow community submissions to the leaderboard and if yes, how one should do that?

It would be quite interesting to get quantitative results for models like OpenHermes 2.5, which were fine-tuned on function-calling data and perform quite well in qualitative testing.

cc @teknium

Hi Lewis! Good question and feedback; we are updating the leaderboard with instructions shortly. Yes, we would love to add more models (the more the better!) and are open to feedback on the best way to accept submissions.

Our current plan is to accept notebook submissions via a PR to the NexusRaven-V2 GitHub repository that showcases the evaluation process on these datasets. We can then update the leaderboard periodically with new submissions after we rerun each notebook on our end (and further test on the held-out set).

Hey folks, just a small update on this from our side. We're still working on it! Hopefully we will put something out very soon :)

Any updates?
