Update src/display/about.py #4
by zodiache - opened

src/display/about.py CHANGED (+4 -3)
@@ -42,9 +42,10 @@ For all these evaluations, a higher score is a better score.
 You can find details on the input/outputs for the models in the `details` of each model, that you can access by clicking the π emoji after the model name

 # Reproducibility
-
-
-
+To reproduce our results, here is the commands you can run, using [this script](https://huggingface.co/spaces/hallucinations-leaderboard/leaderboard/blob/main/backend-cli.py):
+`python backend-cli.py`
+The total batch size we get for models which fit on one A100 node is 8 (8 GPUs * 1). If you don't use parallelism, adapt your batch size to fit.
+*You can expect results to vary slightly for different batch sizes because of padding.*
 """

 FAQ_TEXT = """
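For readers skimming the change, the new reproducibility text boils down to a simple batch-size relationship: one A100 node gives 8 GPUs at a per-GPU batch size of 1, so the total batch size is 8, and a run without parallelism would need to raise its per-device batch size to match. A minimal Python sketch of that arithmetic (the helper name and its arguments are illustrative and not part of backend-cli.py):

```python
# Illustrative sketch of the batch-size arithmetic in the added note.
# This helper is NOT part of backend-cli.py; its real flags/config are not shown in this PR.

def effective_batch_size(per_device_batch_size: int, num_gpus: int) -> int:
    """Total batch size seen by the evaluation: per-device batch * number of GPUs."""
    return per_device_batch_size * num_gpus

# On one A100 node the leaderboard uses 8 GPUs with a batch size of 1 each:
assert effective_batch_size(per_device_batch_size=1, num_gpus=8) == 8

# Without parallelism (a single GPU), raise the per-device batch size to match,
# memory permitting; padding can still make results vary slightly across batch sizes.
assert effective_batch_size(per_device_batch_size=8, num_gpus=1) == 8
```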