---
title: Solbench
emoji: 🏆
colorFrom: pink
colorTo: purple
sdk: gradio
app_file: app.py
pinned: true
datasets:
  - braindao/solbench-naive-judge-random-v1
  - braindao/solbench-naive-judge-openzeppelin-v1
  - braindao/solbench-humaneval-for-solidity-v1
  - braindao/solbench-humaneval-for-solidity-v2
license: apache-2.0
sdk_version: 4.40.0
thumbnail: >-
  https://cdn-uploads.huggingface.co/production/uploads/5f19edf678d261307936f4c8/4v6TPbN8qa6JptyCFUy-J.png
---

# Start the configuration

Most of the variables to change for a default leaderboard are in src/env.py (replace the paths with those of your leaderboard) and src/about.py (for tasks).
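
For illustration, the edits typically look like the sketch below. This is a hedged example, not the file's actual contents: the variable names (`OWNER`, `QUEUE_REPO`, `RESULTS_REPO`) and the `Task` structure are assumptions modeled on the default leaderboard template, so check your copy of src/env.py and src/about.py for the real names.

```python
# src/env.py -- illustrative values only; verify the real variable names in your file
OWNER = "braindao"                    # hypothetical: org owning the queue/results repos
QUEUE_REPO = f"{OWNER}/requests"      # hypothetical: repo holding submission requests
RESULTS_REPO = f"{OWNER}/results"     # hypothetical: repo holding evaluation results

# src/about.py -- one entry per benchmark task; names here are hypothetical
from dataclasses import dataclass
from enum import Enum

@dataclass
class Task:
    benchmark: str   # key under "results" in the result files
    metric: str      # key under the task entry in the result files
    col_name: str    # column header shown in the leaderboard table

class Tasks(Enum):
    task0 = Task("task_name", "metric_name", "My Task")
    task1 = Task("task_name2", "metric_name", "My Other Task")
```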

Results files should have the following format and be stored as JSON files:

```json
{
    "config": {
        "model_dtype": "torch.float16", # or torch.bfloat16 or 8bit or 4bit
        "model_name": "path of the model on the hub: org/model",
        "model_sha": "revision on the hub",
    },
    "results": {
        "task_name": {
            "metric_name": score,
        },
        "task_name2": {
            "metric_name": score,
        }
    }
}
```
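
As a concrete example, a valid result file in this format could be written as follows. This is a minimal sketch: the scores, model identifiers, and output file name are hypothetical, and the task/metric keys must match the tasks declared for your leaderboard (see src/about.py).

```python
import json

# Illustrative result payload; "task_name"/"metric_name" and all values are placeholders.
result = {
    "config": {
        "model_dtype": "torch.float16",
        "model_name": "org/model",
        "model_sha": "main",
    },
    "results": {
        "task_name": {"metric_name": 0.75},
        "task_name2": {"metric_name": 0.61},
    },
}

# Hypothetical file name; the leaderboard scans the results folder for *.json files.
with open("results_org__model.json", "w") as f:
    json.dump(result, f, indent=4)
```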

Request files are created automatically by this tool.

If you encounter a problem on the Space, don't hesitate to restart it to remove the created eval-queue, eval-queue-bk, eval-results and eval-results-bk folders.

# Code logic for more complex edits

You'll find:

- the main table's column names and properties in src/display/utils.py
- the logic to read all results and request files, then convert them into dataframe lines, in src/leaderboard/read_evals.py and src/populate.py (a simplified sketch follows this list)
- the logic to allow or filter submissions in src/submission/submit.py and src/submission/check_validity.py
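
To give a feel for the read-and-populate step, here is a hedged sketch of flattening result files into dataframe rows. It is not the actual code in src/leaderboard/read_evals.py or src/populate.py, which handles many more cases (request status, precision, deduplication); the function name and column scheme below are assumptions.

```python
import glob
import json

import pandas as pd


def results_to_dataframe(results_dir: str) -> pd.DataFrame:
    """Build one leaderboard row per result file (simplified illustration)."""
    rows = []
    for path in glob.glob(f"{results_dir}/**/*.json", recursive=True):
        with open(path) as f:
            data = json.load(f)
        row = {
            "model": data["config"]["model_name"],
            "revision": data["config"]["model_sha"],
        }
        # One column per (task, metric) score, matching the JSON schema shown above.
        for task, metrics in data["results"].items():
            for metric, score in metrics.items():
                row[f"{task}/{metric}"] = score
        rows.append(row)
    return pd.DataFrame(rows)
```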