---
title: 3D-POPE Leaderboard
emoji: 🥇
colorFrom: green
colorTo: indigo
sdk: gradio
sdk_version: 4.4.0
app_file: app.py
pinned: false
license: apache-2.0
---
This is the public leaderboard for the 3D-POPE benchmark, which evaluates hallucination in 3D-LLMs. The benchmark was introduced in ["3D-GRAND: A Million-Scale Dataset for 3D-LLMs with Better Grounding and Less Hallucination"](https://arxiv.org/abs/2406.05132). If you use it, please cite:
```bibtex
@misc{yang20243dgrand,
      title={3D-GRAND: A Million-Scale Dataset for 3D-LLMs with Better Grounding and Less Hallucination},
      author={Jianing Yang and Xuweiyi Chen and Nikhil Madaan and Madhavan Iyengar and Shengyi Qian and David F. Fouhey and Joyce Chai},
      year={2024},
      eprint={2406.05132},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
# Start the configuration
Most of the variables to change for a default leaderboard are in `src/env.py` (set the repository paths for your leaderboard) and `src/about.py` (define your tasks).
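As a rough illustration, `src/env.py` typically holds repository identifiers like the ones below. The variable names (`OWNER`, `QUEUE_REPO`, `RESULTS_REPO`) are assumptions based on the standard Hugging Face leaderboard template; check the actual file in this repo before relying on them.

```python
# src/env.py -- hypothetical sketch, not the actual file contents.
import os

from huggingface_hub import HfApi

# Org or user that owns the leaderboard repos (assumption: adjust to yours).
OWNER = "your-org"

# Hub datasets used by the leaderboard (assumed naming convention).
QUEUE_REPO = f"{OWNER}/requests"    # incoming eval requests
RESULTS_REPO = f"{OWNER}/results"   # finished eval results

# Token with write access, read from the environment.
TOKEN = os.environ.get("HF_TOKEN")

API = HfApi(token=TOKEN)
```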
Results files should follow the format below and be stored as JSON files:
```json
{
    "config": {
        "model_dtype": "torch.float16", # or torch.bfloat16 or 8bit or 4bit
        "model_name": "path of the model on the hub: org/model",
        "model_sha": "revision on the hub"
    },
    "results": {
        "task_name": {
            "metric_name": score
        },
        "task_name2": {
            "metric_name": score
        }
    }
}
```
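As a quick illustration, a results file in this format can be written with a few lines of Python. The model name, revision, task name (`3d_pope`), metric name (`accuracy`), and score below are placeholders, not real evaluation output:

```python
import json

# Placeholder payload matching the format above; fill in your actual
# model info, task names, and scores.
results = {
    "config": {
        "model_dtype": "torch.float16",
        "model_name": "org/model",
        "model_sha": "main",
    },
    "results": {
        "3d_pope": {
            "accuracy": 0.87,
        },
    },
}

# Store one JSON file per evaluated model.
with open("results_org_model.json", "w") as f:
    json.dump(results, f, indent=4)
```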
Request files are created automatically by this tool.
If you encounter problems on the space, don't hesitate to restart it to remove the created `eval-queue`, `eval-queue-bk`, `eval-results`, and `eval-results-bk` folders.
# Code logic for more complex edits
You'll find:
- the main table's column names and properties in `src/display/utils.py`
- the logic to read all results and request files, then convert them into dataframe lines, in `src/leaderboard/read_evals.py` and `src/populate.py` (see the sketch after this list)
- the logic to allow or filter submissions in `src/submission/submit.py` and `src/submission/check_validity.py`
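As a rough sketch of what that reading logic does, the snippet below flattens each results JSON file into one leaderboard row. The function and column names are illustrative, not the actual API of `read_evals.py`:

```python
import glob
import json

import pandas as pd

def load_results_as_dataframe(results_dir: str) -> pd.DataFrame:
    """Flatten each results JSON file into one leaderboard row.

    Illustrative sketch only; the real logic lives in
    src/leaderboard/read_evals.py and src/populate.py.
    """
    rows = []
    for path in glob.glob(f"{results_dir}/**/*.json", recursive=True):
        with open(path) as f:
            data = json.load(f)
        row = {"model": data["config"]["model_name"]}
        # One column per task/metric pair, e.g. "task_name.metric_name".
        for task, metrics in data["results"].items():
            for metric, score in metrics.items():
                row[f"{task}.{metric}"] = score
        rows.append(row)
    return pd.DataFrame(rows)
```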