# leaderboard/src/about.py
# Your leaderboard name
TITLE = """<h1 align="center" id="space-title">AIR-Bench</h1>"""
# What does your leaderboard evaluate?
INTRODUCTION_TEXT = """
AIR-Bench: Automated Heterogeneous Information Retrieval Benchmark
"""
# Which evaluations are you running? how can people reproduce what you have?
BENCHMARKS_TEXT = """
## How it works
## Reproducibility
To reproduce our results, here are the commands you can run:
"""
EVALUATION_QUEUE_TEXT = """
## Some good practices before submitting a model
### 1)
### 2)
### 3)
### 4)
## In case of model failure
If your model is displayed in the `FAILED` category, its execution stopped.
Make sure you have followed the above steps first.
If everything is done, check that you can launch the EleutherAI Harness on your model locally, using the above command without modifications (you can add `--limit` to restrict the number of examples evaluated per task).
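As an illustration, a local sanity check might look like the following (a hypothetical invocation of the EleutherAI lm-evaluation-harness CLI; the model and task names are placeholders to substitute with your own):

```shell
# Hypothetical local run with the EleutherAI lm-evaluation-harness;
# replace the model path and task with your own values.
lm_eval --model hf \
    --model_args pretrained=your-org/your-model \
    --tasks hellaswag \
    --limit 10  # evaluate only a few examples per task for a quick check
```

If this command fails locally, the hosted evaluation will most likely fail in the same way, so it is worth debugging the local run first.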
"""
CITATION_BUTTON_LABEL = "Copy the following snippet to cite these results"
CITATION_BUTTON_TEXT = r"""
"""