|
--- |
|
license: apache-2.0 |
|
--- |
|
|
|
# Circular-based Relation Probing Evaluation (CRPE) |
|
|
|
CRPE is a benchmark designed to quantitatively evaluate the object recognition and relation comprehension abilities of models.
|
The evaluation is formulated as single-choice questions. |
|
|
|
The benchmark consists of four splits: |
|
**Existence**, **Subject**, **Predicate**, and **Object**. |
|
|
|
The **Existence** split evaluates object recognition, while the remaining splits evaluate relation comprehension by probing each element of the relation triplet `(subject, predicate, object)` separately.
|
Some data examples are shown below. |
|
|
|
<img width="800" alt="image" src="https://cdn-uploads.huggingface.co/production/uploads/619507e7b74b6c591f794340/_NKaowl2OUBAjck1XCAPm.jpeg"> |
|
|
|
Additionally, to evaluate the dependency on language priors, we also include abnormal data in our evaluation. |
|
The images in this abnormal data depict relation triplets that are very rare in the real world.
|
|
|
<img width="800" alt="image" src="https://cdn-uploads.huggingface.co/production/uploads/619507e7b74b6c591f794340/qKWw7Qb93OXClxI_VrCRk.jpeg"> |
|
|
|
For a robust evaluation, we adopt CircularEval as our evaluation strategy. |
|
Under this setting, a question is considered correctly answered only when the model predicts the correct answer in every one of N iterations, where N is the number of choices.
|
In each iteration, a circular shift is applied to both the choices and the answer to form a new query for the model. |
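The CircularEval procedure can be sketched as follows. This is a minimal illustration, not the benchmark's actual implementation; `ask_model` is a hypothetical callable that takes a question and a list of choices and returns the index of the model's selected choice.

```python
def circular_eval(ask_model, question, choices, answer_idx):
    """Return True only if the model picks the correct answer
    under every circular shift of the choices."""
    n = len(choices)
    for shift in range(n):
        # Rotate the choice list; the correct answer's position
        # moves accordingly, so the query differs each iteration.
        shifted_choices = choices[shift:] + choices[:shift]
        shifted_answer = (answer_idx - shift) % n
        if ask_model(question, shifted_choices) != shifted_answer:
            return False
    return True
```

A model that always outputs the same option letter regardless of content fails this check, which is exactly the position bias CircularEval is meant to filter out.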
|
|
|
See our [project](https://github.com/OpenGVLab/all-seeing/all-seeing-v2) for more details!