Update README.md
README.md (CHANGED)
@@ -13,7 +13,7 @@ license: openrail
 **OVDEval** is a new benchmark for OVD models, which includes 9 sub-tasks and introduces evaluations on commonsense knowledge, attribute understanding, position understanding, object relation comprehension, and more. The dataset is meticulously created to provide hard negatives that challenge models' true understanding of visual and linguistic input. Additionally, we identify a problem with the popular Average Precision (AP) metric when benchmarking models on these fine-grained label datasets and propose a new metric called **Non-Maximum Suppression Average Precision (NMS-AP)** to address this issue.
 
 
-
+## Data Details
 
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/658a2e94991d8e7fb24f7688/ngOkek9wJdppyxPB0xZ8Q.png)
 
@@ -65,7 +65,7 @@ license: openrail
 
 ```
 
-
+## How to use it
 
 Reference https://github.com/om-ai-lab/OVDEval
 
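The README excerpt above names NMS-AP but the commit itself does not show how the metric works. The sketch below is an unofficial illustration of the underlying idea, inferred from the metric's name rather than taken from this commit: predictions that a model produces under several fine-grained label texts are first collapsed with class-agnostic NMS, so a model that predicts the same box for every candidate label is no longer rewarded by standard AP. All function names and thresholds are illustrative only.

```python
# Illustrative sketch (not the official OVDEval implementation):
# class-agnostic NMS across per-label predictions before AP is computed.
from typing import Dict, List
import numpy as np


def iou(a: np.ndarray, b: np.ndarray) -> float:
    """IoU of two boxes in [x1, y1, x2, y2] format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)


def class_agnostic_nms(preds: List[Dict], iou_thr: float = 0.5) -> List[Dict]:
    """Keep the highest-scoring box among overlapping predictions,
    regardless of which label text produced them."""
    preds = sorted(preds, key=lambda p: p["score"], reverse=True)
    kept: List[Dict] = []
    for p in preds:
        if all(iou(np.array(p["bbox"]), np.array(k["bbox"])) < iou_thr for k in kept):
            kept.append(p)
    return kept


# Example: two near-identical boxes predicted under different label texts.
preds = [
    {"bbox": [10, 10, 50, 50], "score": 0.9, "label": "black dog"},
    {"bbox": [11, 10, 50, 51], "score": 0.8, "label": "white dog"},
]
print(class_agnostic_nms(preds))  # only the 0.9 box survives
```

The surviving detections would then be fed into a standard COCO-style AP evaluation.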
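For the "How to use it" section, the commit only points to the linked repository. Below is a minimal, hypothetical loading sketch: it assumes the per-task annotation files follow a COCO-style JSON layout, and the file name `position.json` is a placeholder for whichever sub-task you download; consult https://github.com/om-ai-lab/OVDEval for the exact schema.

```python
# Hypothetical loading sketch; file name and schema are assumptions.
import json
from collections import defaultdict

with open("position.json") as f:
    anno = json.load(f)

# Standard COCO-style keys: "images", "annotations", "categories".
images = {img["id"]: img for img in anno["images"]}
boxes_per_image = defaultdict(list)
for ann in anno["annotations"]:
    boxes_per_image[ann["image_id"]].append(ann["bbox"])  # [x, y, w, h]

print(f"{len(images)} images, {len(anno['annotations'])} boxes")
```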