Update README.md
README.md CHANGED
@@ -19,4 +19,25 @@ configs:
  - split: test
    path: "n_correct/test/split.parquet"
---

# DARE
DARE (Diverse Visual Question Answering with Robustness Evaluation) is a carefully created and curated multiple-choice VQA benchmark.
DARE evaluates VLM performance on five diverse categories and includes four robustness-oriented evaluations based on variations of:
- prompts
- the subsets of answer options
- the output format
- the number of correct answers.
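
The first two variations above concern how each question is presented. As a rough illustration of what varying the prompt and the option subset means in practice, a multiple-choice prompt can be assembled from a question and a subset of its answer options (a toy helper, not the prompt templates used in DARE):

```python
def format_prompt(question, options):
    # Label the given answer options A, B, C, ... and append them to the question
    letters = [chr(ord("A") + i) for i in range(len(options))]
    lines = [question] + [f"{letter}. {option}" for letter, option in zip(letters, options)]
    return "\n".join(lines)

# Passing a different subset or ordering of options yields a different prompt
print(format_prompt("What color is the sky?", ["green", "blue", "red"]))
```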
The validation split of the dataset contains images, questions, answer options, and correct answers. We withhold the correct answers for the test split to prevent contamination.
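
Because the number of correct answers varies across questions, evaluation has to compare the set of selected options rather than a single label. A minimal set-based exact-match scorer might look like this (an illustrative sketch, not DARE's official metric):

```python
def score_example(predicted, correct):
    # Exact match over sets: the order of the selected options does not matter
    return set(predicted) == set(correct)

def accuracy(predictions, references):
    # Fraction of questions where the predicted option set matches exactly
    matches = [score_example(p, r) for p, r in zip(predictions, references)]
    return sum(matches) / len(matches)

# Hypothetical predictions for three questions, one with two correct answers
preds = [["A"], ["B", "C"], ["D"]]
refs = [["A"], ["C", "B"], ["A"]]
print(accuracy(preds, refs))  # → 0.6666666666666666
```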
## Load the Dataset
To use the dataset, load it with the Hugging Face `datasets` library:
```python
from datasets import load_dataset
# Load the dataset
subset = "1_correct"  # change to the subset you want to use
dataset = load_dataset("hSterz/DARE", subset)
```