great evals

#2 · opened by gblazex

Congrats on providing both MT-Bench and Arena-Hard numbers, not just Open LLM Leaderboard scores.

It would be interesting to also run the new AlpacaEval LC (length-controlled), which comes close to Arena-Hard in quality (and is much better than MT-Bench):

https://tatsu-lab.github.io/alpaca_eval/
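If it helps, running it end to end looks roughly like this, a minimal sketch following the alpaca_eval README; the config name `my_model`, the output path, and the annotator config are placeholders/assumptions, and the GPT-4-based judge needs an OpenAI key:

```bash
pip install alpaca-eval

# The default judge is GPT-4-based, so an OpenAI key is required.
export OPENAI_API_KEY=<your_api_key>

# Generate completions for a model defined in the repo's model configs
# and have them judged. 'weighted_alpaca_eval_gpt4_turbo' is assumed here
# to be the annotator behind the length-controlled metric.
alpaca_eval evaluate_from_model \
  --model_configs 'my_model' \
  --annotators_config 'weighted_alpaca_eval_gpt4_turbo'

# Or, if you already have generations in the expected JSON format:
alpaca_eval --model_outputs 'results/my_model/model_outputs.json'
```

The length-controlled win rate should then appear alongside the raw win rate in the printed results.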

Also, contributing to the leaderboard is as simple as opening a PR with the config and result files:
https://github.com/tatsu-lab/alpaca_eval?tab=readme-ov-file#contributing-a-model
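For anyone curious what that PR contains: a model entry is a small YAML config in the repo (plus the generated outputs/annotations). A rough sketch only; every name, path, and value below is a placeholder, not a real entry:

```yaml
# configs.yaml for a hypothetical model entry (file location per the repo's
# contribution guide; field names assumed from existing entries)
my_model:
  prompt_template: "my_model/prompt.txt"           # chat template for the model
  fn_completions: "huggingface_local_completions"  # generation backend
  completions_kwargs:
    model_name: "my-org/my-model-7b-instruct"      # HF Hub id (placeholder)
    max_new_tokens: 2048
    temperature: 0.7
    do_sample: true
  pretty_name: "My Model 7B Instruct"
  link: "https://huggingface.co/my-org/my-model-7b-instruct"
```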


Thanks for the reference, @gblazex. We will look into evaluating on AlpacaEval ✌️
