---
license: apache-2.0
---
# Circular-based Relation Probing Evaluation (CRPE)
CRPE is a benchmark designed to quantitatively evaluate the object recognition and relation comprehension ability of models.
The evaluation is formulated as single-choice questions.
The benchmark consists of four splits:
**Existence**, **Subject**, **Predicate**, and **Object**.
The **Existence** split evaluates the object recognition ability while the remaining splits are designed to evaluate the capability of relation comprehension, focusing on probing each of the elements in the subject-predicate-object triplets of the scene graph separately.
Some data examples are shown below.
<img width="800" alt="image" src="https://cdn-uploads.huggingface.co/production/uploads/619507e7b74b6c591f794340/_NKaowl2OUBAjck1XCAPm.jpeg">
Additionally, to evaluate the models' reliance on language priors, we also include abnormal data in our evaluation.
The images in this abnormal data depict relation triplets that are very rare in the real world.
<img width="800" alt="image" src="https://cdn-uploads.huggingface.co/production/uploads/619507e7b74b6c591f794340/qKWw7Qb93OXClxI_VrCRk.jpeg">
For a robust evaluation, we adopt CircularEval as our evaluation strategy.
Under this setting, a question is considered correctly answered only when the model consistently predicts the correct answer in each of the N iterations, where N is the number of choices.
In each iteration, a circular shift is applied to both the choices and the answer to form a new query for the model.
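
The sketch below illustrates the CircularEval idea described above. It is a minimal, hypothetical implementation: the `ask_model` callable (which takes a question and a list of choices and returns the index of the predicted choice) is a placeholder for the model under evaluation, not part of the benchmark's actual API.

```python
def circular_eval(question, choices, answer_idx, ask_model):
    """Return True only if the model answers correctly under all N circular shifts.

    question    -- the question text
    choices     -- list of N answer choices
    answer_idx  -- index of the correct choice in the original ordering
    ask_model   -- hypothetical callable: (question, choices) -> predicted index
    """
    n = len(choices)
    for shift in range(n):
        # Circularly shift the choices; the correct answer's index moves with them.
        shifted_choices = choices[shift:] + choices[:shift]
        shifted_answer = (answer_idx - shift) % n
        if ask_model(question, shifted_choices) != shifted_answer:
            return False  # a single wrong prediction fails the whole question
    return True
```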
See our [project](https://github.com/OpenGVLab/all-seeing/all-seeing-v2) for more details!