# What Lies Beneath: A Call for Distribution-based Visual Question & Answer Datasets

Publication: TBD
GitHub Repo: TBD

This is a histogram-based dataset for visual question and answer (VQA) with humans and large language/multimodal models (LMMs).

The data contains synthetically generated single-panel histogram images, the data used to create each histogram, bounding-box data for titles, axis and tick labels, and data marks, and VQA question-answer pairs. The subset of data presented in the paper (the `example_hists/` folder) includes both human (two annotators) and LMM (gpt-5-nano and gpt-5-mini) annotations.

See the GitHub link above for the code used to create and parse the following files.

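To fetch all files locally, one option is the standard `huggingface_hub` client. This is a minimal sketch, not part of the paper's own tooling:

```python
# Minimal sketch: download a local snapshot of the dataset files.
# Assumes `pip install huggingface_hub`.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="ReadingTimeMachine/visual_qa_histograms",
    repo_type="dataset",  # this repo is a dataset, not a model
)
print(local_dir)  # path to the downloaded snapshot
```
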
## Directory Structure

An overview of the [directory structure](https://huggingface.co/datasets/ReadingTimeMachine/visual_qa_histograms/tree/main) is as follows:
- `example_hists/` -- contains the images and JSONs for a small (80 images), visually uniform set of histogram data, with several questions annotated by both humans and LMMs
- `example_hists_larger/` -- a larger (500 images) dataset of uniform histogram images
- `example_hists_complex/` -- a more complex (100 images) dataset of histograms with a variety of distributions, shapes, colors, etc.

The paper dataset (`example_hists/`) has the following [directory structure](https://huggingface.co/datasets/ReadingTimeMachine/visual_qa_histograms/tree/main/example_hists); a loading sketch follows the list:
- `LLM_outputs/` -- contains outputs from various trials using ChatGPT-5
- `imgs/` -- stores all images (also provided in the `imgs.zip` file)
- `jsons/` -- stores JSON files with bounding boxes, the data used to create each image, and VQA data
- `human_and_llm_annotated_data.csv` -- contains two human annotations and two LMM annotations (gpt-5-nano, gpt-5-mini) for a subset of questions

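A minimal sketch for pairing one image with its JSON annotations. It assumes matching basenames between `imgs/` and `jsons/` and a `.png` extension; these are assumptions, not documented guarantees, so check the actual files:

```python
# Minimal sketch: pair a histogram image with its JSON annotation file.
import json
from pathlib import Path

from PIL import Image  # pip install pillow

root = Path("example_hists")  # hypothetical local path to the downloaded folder

json_path = sorted((root / "jsons").glob("*.json"))[0]  # first annotation file
img_path = root / "imgs" / (json_path.stem + ".png")    # matching image; extension assumed

with open(json_path) as f:
    record = json.load(f)  # bounding boxes, plot data, VQA pairs

image = Image.open(img_path)
print(img_path.name, image.size)
print(type(record))  # inspect the JSON structure (top-level fields vary)
```
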
## Human and LMM Annotations

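A minimal sketch for loading the annotation table with pandas. The commented-out column names are hypothetical placeholders; see the CSV header for the actual schema:

```python
# Minimal sketch: load the human/LMM annotation table.
import pandas as pd

df = pd.read_csv("example_hists/human_and_llm_annotated_data.csv")
print(df.columns.tolist())  # discover the actual annotation columns

# Hypothetical agreement check between one human and one LMM column
# (replace with real column names from the header):
# agreement = (df["human_1"] == df["gpt_5_nano"]).mean()
```
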
## Citation Information

If you use this work, please cite:
```
TBD
```