Is this space supposed to work in tandem with Lighteval?

#5
by sadra-barikbin

Hi there!
Thanks for this great space. Is it supposed to work with the format of the results Lighteval pushes to the Hub? I ask because read_evals.py::EvalResults::init_from_json_file() expects a config entry in the results JSON file, but the files Lighteval pushes don't have one; they have a config_general entry instead.
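For illustration, here is roughly the mismatch I mean; only the key names config and config_general come from the code and the pushed files, the surrounding fields are placeholders:

```python
# Shape the space's parser expects: a top-level "config" key.
expected = {
    "config": {"model_name": "org/model"},   # field names are illustrative
    "results": {"task": {"metric": 0.5}},    # illustrative
}

# Shape Lighteval pushes: the same information, but under "config_general".
pushed = {
    "config_general": {"model_name": "org/model"},
    "results": {"task": {"metric": 0.5}},
}
```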


Hi!
Not necessarily: it can work with Lighteval, the harness, or any other eval script you'd like, in which case you need to adapt the space to fit your use case.
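For example, a minimal sketch of such an adaptation, assuming the only difference is the key name; the helper below is hypothetical, not the space's actual code:

```python
import json

def load_eval_config(json_path: str) -> dict:
    """Return the config block of a results file, accepting either
    the space's "config" key or Lighteval's "config_general" key.
    Hypothetical helper for illustration only."""
    with open(json_path) as f:
        data = json.load(f)
    config = data.get("config") or data.get("config_general")
    if config is None:
        raise KeyError(f"No 'config' or 'config_general' entry in {json_path}")
    return config
```

You would then call this (or fold the same fallback into init_from_json_file) wherever the space currently reads the config entry.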
