⁉️ FAQ - Start here before opening an issue ⁉️

#179 · opened by clefourrier (HF staff, Open LLM Leaderboard org) · edited Nov 28, 2023

Hi! Thank you for your interest in the Open LLM Leaderboard!
Below are some common questions - if this FAQ does not answer what you need, feel free to create a new issue, and we'll take care of it as soon as we can!

Submissions

  • My model requires trust_remote_code=True, can I submit it?
    We only support models that have been integrated in a stable version of the transformers library for automatic submission, as we don't want to run just any kind of code on our cluster. If you are unsure whether your model is covered, see the loading check sketched right after this list.

  • What about models of type X?
    We only support models that have been integrated in a stable version of the transformers library for automatic submission.
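
As a quick local check (a sketch, not an official requirement): if your model loads with a stable transformers release and trust_remote_code left disabled, its architecture is likely supported. The repository id below is a placeholder.

```python
# Sketch: check that a model loads with a stable transformers release,
# without trust_remote_code (the repository id below is a placeholder).
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

repo = "your-org/your-model"  # hypothetical repository id

# trust_remote_code defaults to False; it is passed explicitly here for clarity.
config = AutoConfig.from_pretrained(repo, trust_remote_code=False)
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=False)
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=False)
print(type(model).__name__)  # should be a class that ships with transformers
```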

Model evaluation

  • My model disappeared from all the queues, what happened?
    A model disappearing from all the queues usually means that there has been a failure. You can check if that is the case by looking at your model here.

  • What causes an evaluation failure?
    Most of the failures we get come from problems in the submissions (corrupted files, config problems, wrong parameters selected for eval, ...), so we'd be grateful if you first make sure you have followed the steps in About. However, from time to time, we have failures on our side (hardware/node failures, problems with an update of our backend, connectivity problems that prevent results from being saved, ...).

  • How can I report an evaluation failure?
    As we store the logs for all models, feel free to create an issue where you link to the requests file of your model (look for it here, or see the sketch after this list), so we can investigate! If the model failed due to a problem on our side, we'll relaunch it right away!
    Note: Please do not re-upload your model under a different name; it will not help.
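
To locate your model's request file programmatically, one option is to list the files of the requests dataset on the Hub and filter by your model name. A minimal sketch, assuming the requests live in the open-llm-leaderboard/requests dataset and that each request file contains a status field (check the About tab for the exact repository):

```python
# Sketch: find and read your model's request file(s).
# The dataset id, file layout and "status" field are assumptions; check the About tab.
import json

from huggingface_hub import HfApi, hf_hub_download

REQUESTS_REPO = "open-llm-leaderboard/requests"  # assumed dataset id
model_name = "your-org/your-model"               # hypothetical model id

api = HfApi()
files = api.list_repo_files(REQUESTS_REPO, repo_type="dataset")
matches = [f for f in files if model_name in f]

for path in matches:
    local_path = hf_hub_download(REQUESTS_REPO, path, repo_type="dataset")
    with open(local_path) as fh:
        request = json.load(fh)
    print(path, "->", request.get("status", "unknown"))
```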

Model results

  • What kind of information can I find?
    Let's imagine you are interested in the Yi-34B results. You have access to three different categories of information:
    - The request file: it gives you information about the status of the evaluation
    - The aggregated results folder: it gives you aggregated scores, per experimental run
    - The details dataset: it gives you the full details (scores and examples for each task, for a given model); a loading sketch follows this list
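
One way to explore the details is with the datasets library. A minimal sketch, assuming the Yi-34B details are stored in a dataset named open-llm-leaderboard/details_01-ai__Yi-34B with roughly one configuration per task (the exact naming may differ; check the dataset card):

```python
# Sketch: explore the per-sample details of a model's evaluation.
# The dataset id and config naming below are assumptions; check the dataset card.
from datasets import get_dataset_config_names, load_dataset

DETAILS_REPO = "open-llm-leaderboard/details_01-ai__Yi-34B"  # assumed naming scheme

configs = get_dataset_config_names(DETAILS_REPO)  # roughly one config per task/run
print(configs[:5])

details = load_dataset(DETAILS_REPO, configs[0])
print(details)  # available splits for that task

split_name = next(iter(details))
print(details[split_name][0])  # one evaluated example (prompt, target, model output, metrics)
```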

Editing model information

  • I upgraded my model and want to re-submit, how can I do that?
    Please open an issue with the precise name of your model, and we'll remove it from the leaderboard so you can resubmit. You can also resubmit directly with the new commit hash!

  • I need to rename my model, how can I do that?
    You can use @Weyaxi's super cool tool to request model name changes, then open a discussion where you link to the created pull request, and we'll check it and merge it as needed.

Leaderboard display

  • The leaderboard has crashed with a connection error, help!
    This happens from time to time and is normal, so don't worry. The leaderboard will be restarted automatically in less than an hour (or earlier if one of the maintainers notices it). Please only open an issue if the leaderboard has been down for longer than an hour.

  • Why do models appear several times in the leaderboard?
    We run evaluations with the user-selected precision and model commit. Sometimes, users submit the same model at different commits or at different precisions (for example, in float16 and 4bit to see how quantization affects performance). You should be able to verify this by displaying the precision and model sha columns. If, however, you see a model appearing several times with the same precision and commit hash, this is not normal.

  • What is this concept of "flagging"?
    This mechanism allows users to report models that have an unfair advantage on the leaderboard. This covers several categories: exceedingly good results because the model was (maybe accidentally) trained on the evaluation data, models that are copies of other models without proper attribution, etc.

  • My model has been flagged improperly, what can I do?
    Every flagged model has a discussion associated with it - feel free to plead your case there, and we'll see what to do together with the community.

Misc

  • Why don't you display closed source model scores?
    This is a leaderboard for Open models, both for philosophical reasons (openness is cool) and for practical reasons: we want to ensure that the results we display are accurate and reproducible, but 1) commercial closed models can change their API at any time, rendering any score obtained at a given moment incorrect, and 2) we re-run everything on our cluster to ensure all models are evaluated on the same setup, which is not possible for these models.

  • I want to discuss model results and debate about them!
    We have a discussion especially for this here! Have fun :)

  • I have an issue about accessing the leaderboard through the Gradio API
    Since this is not the recommended way to access the leaderboard, we won't provide support for it, but you can look at tools provided by the community for inspiration! A minimal starting sketch follows this list.
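
For those who still want to experiment, a community tool might start from the gradio_client library: connect to the Space and inspect which endpoints it exposes. A minimal, unsupported sketch; the Space id below is an assumption and may change:

```python
# Sketch (unsupported): inspect the leaderboard Space's Gradio API.
# The Space id below is an assumption and may change over time.
from gradio_client import Client

client = Client("HuggingFaceH4/open_llm_leaderboard")  # assumed Space id
client.view_api()  # prints the endpoints the Space exposes, if any
```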

Open LLM Leaderboard org · edited Oct 27, 2023

Hi! This discussion is to store the FAQ only; please open an issue if you need help or have a request.

Open LLM Leaderboard org

This discussion has been moved to the About tab of the leaderboard

clefourrier changed discussion status to closed
