  - split: train
    path: data/train-*
---

[Paper](https://arxiv.org/abs/2403.13315) | [Code](https://github.com/declare-lab/LLM-PuzzleTest/tree/master/PuzzleVQA) | [Dataset](https://huggingface.co/datasets/declare-lab/puzzlevqa)

### About

Large multimodal models extend the impressive capabilities of large language models by integrating multimodal
understanding abilities. However, it is not clear how they can emulate the general intelligence and reasoning ability of
humans. As recognizing patterns and abstracting concepts are key to general intelligence, we introduce PuzzleVQA, a
collection of puzzles based on abstract patterns. With this dataset, we evaluate large multimodal models with abstract
patterns based on fundamental concepts, including colors, numbers, sizes, and shapes. Through our experiments on
state-of-the-art large multimodal models, we find that they are not able to generalize well to simple abstract patterns.
Notably, even GPT-4V cannot solve more than half of the puzzles. To diagnose the reasoning challenges in large
multimodal models, we progressively guide the models with our ground truth reasoning explanations for visual perception,
inductive reasoning, and deductive reasoning. Our systematic analysis finds that the main bottlenecks of GPT-4V are
weaker visual perception and inductive reasoning abilities. Through this work, we hope to shed light on the limitations
of large multimodal models and how they can better emulate human cognitive processes in the future.

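As a quick start, the snippet below is a minimal sketch of loading the dataset with the Hugging Face `datasets` library. The column names mentioned in the comments are assumptions rather than documented schema; check the dataset viewer on the Hub for the exact fields and available configs.

```python
# Minimal sketch of loading PuzzleVQA with the Hugging Face `datasets` library.
# If the dataset defines multiple configs, pass the subset name as the second argument.
# Column names are assumptions here; check the dataset viewer on the Hub for the schema.
from datasets import load_dataset

dataset = load_dataset("declare-lab/puzzlevqa", split="train")

example = dataset[0]
print(example.keys())  # inspect the available fields (e.g. image, question, options, answer)
```
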
### Example Puzzle

The figure below shows an example question which involves the color concept in PuzzleVQA, and an incorrect answer from
GPT-4V. There are generally three stages that can be observed in the solving process: visual perception (blue),
inductive reasoning (green), and deductive reasoning (red). Here, the visual perception was incomplete, causing a
mistake during deductive reasoning.

![](images/example.png)

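To make the bottleneck analysis concrete, here is a rough, hypothetical sketch of the progressive guidance idea: reveal the ground-truth explanation for each stage one at a time and record where the model first answers correctly. The field names (`question`, `options`, `answer`, and the per-stage `explanation` keys) and the `query_model` callable are placeholders, not the official evaluation code (see the linked GitHub repository for that).

```python
# Hypothetical sketch of progressive guidance: add ground-truth explanations stage
# by stage and check at which stage the model starts answering correctly.
# All field names and `query_model` are placeholders, not the official implementation.

STAGES = ["visual_perception", "inductive_reasoning", "deductive_reasoning"]

def build_prompt(example, revealed_stages):
    parts = [example["question"]]
    for stage in revealed_stages:
        # e.g. example["explanation"]["visual_perception"] would hold the ground-truth hint
        parts.append(f"{stage.replace('_', ' ').title()} hint: {example['explanation'][stage]}")
    parts.append("Options: " + ", ".join(example["options"]))
    return "\n".join(parts)

def diagnose(example, query_model):
    """Return the first stage whose hint lets the model answer correctly (or None)."""
    for k in range(len(STAGES) + 1):
        prompt = build_prompt(example, STAGES[:k])
        answer = query_model(image=example["image"], prompt=prompt)
        if answer.strip() == example["answer"]:
            return STAGES[k - 1] if k > 0 else "no_hint_needed"
    return None
```
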
### Puzzle Components

The figure below shows an example illustration of the components (top) and reasoning explanations (bottom) for abstract
puzzles in PuzzleVQA. To construct each puzzle instance, we first define the layout and pattern of a multimodal
template, and populate the template with suitable objects that demonstrate the underlying pattern. For interpretability,
we also construct ground truth reasoning explanations to interpret the puzzle and explain the general solution stages.

![](images/components.png)

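As a small illustration of this template-and-populate process, the sketch below generates one toy instance whose underlying pattern is an alternating color cycle, together with a ground-truth explanation. The object pool, pattern, and output fields are illustrative assumptions only; the actual generation code is in the PuzzleVQA GitHub repository.

```python
import random

# Illustrative sketch: define an abstract pattern (an alternating color cycle),
# populate a layout with objects that realize it, and attach a ground-truth explanation.
COLORS = ["red", "green", "blue", "yellow"]

def make_color_cycle_instance(length=4):
    """Create one toy puzzle instance based on a repeating two-color cycle."""
    cycle = random.sample(COLORS, 2)                 # e.g. ["red", "blue"]
    objects = [cycle[i % 2] for i in range(length)]  # red, blue, red, blue
    hidden = random.randrange(length)                # position of the missing object
    answer = objects[hidden]
    shown = [c if i != hidden else "?" for i, c in enumerate(objects)]
    explanation = (
        f"The visible objects alternate between {cycle[0]} and {cycle[1]}, "
        f"so the missing object at position {hidden + 1} must be {answer}."
    )
    return {"layout": shown, "answer": answer, "explanation": explanation}

print(make_color_cycle_instance())
```
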
### Puzzle Taxonomy

The figure below shows the taxonomy of abstract puzzles in PuzzleVQA with sample questions, based on fundamental
concepts such as colors and size. To enhance diversity, we design both single-concept and dual-concept puzzles.

![](images/taxonomy.png)

### Evaluation Results

We report the main evaluation results on single-concept and dual-concept puzzles in Table 1 and Table 2 respectively.
The evaluation results for single-concept puzzles, as shown in Table 1, reveal notable differences in performance among
the open-source and closed-source models. GPT-4V stands out with the highest average score of 46.4, demonstrating
superior abstract pattern reasoning on single-concept puzzles involving numbers, colors, and size. It particularly excels
in the "Numbers" category with a score of 67.5, far surpassing other models, which may be due to its advantage in math
reasoning tasks (Yang et al., 2023). Claude 3 Opus follows with an overall average of 39.4, showing its strength in
the "Shapes" category with a top score of 44.5. The other models, including Gemini Pro and LLaVA-13B, trail behind with
averages of 34.5 and 27.5 respectively, performing similarly to the random baseline on several categories.

In the evaluation on dual-concept puzzles, as shown in Table 2, GPT-4V stands out again with the highest average score
of 45.5. It performed particularly well in categories such as "Colors & Numbers" and "Colors & Size" with scores of
56.0 and 55.0 respectively. Claude 3 Opus closely follows with an average of 43.7, showing strong performance in
"Numbers & Size" with the highest score of 34.0. Interestingly, LLaVA-13B, despite its lower overall average of 31.1,
scores the highest in the "Size & Shapes" category at 39.0. Gemini Pro, on the other hand, has a more balanced
performance across categories but with a slightly lower overall average of 30.1. Overall, we find that models perform
similarly on average for single-concept and dual-concept patterns, which suggests that they are able to relate multiple
concepts such as colors and numbers together.

![](images/results.png)

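The scores above are multiple-choice accuracies. Below is a minimal sketch of computing such an accuracy over the dataset, assuming fields like `question`, `options`, and `answer` and a user-supplied `predict` callable; these names are assumptions for illustration, not the official evaluation harness (which is in the GitHub repository).

```python
# Hedged sketch of a multiple-choice accuracy loop over PuzzleVQA.
# `predict` is any callable mapping (image, question, options) -> a chosen option string.
# Column names are assumptions; adapt them to the actual dataset schema.
from datasets import load_dataset

def evaluate(predict, split="train"):
    data = load_dataset("declare-lab/puzzlevqa", split=split)
    correct = 0
    for example in data:
        choice = predict(example["image"], example["question"], example["options"])
        correct += int(choice == example["answer"])
    return 100.0 * correct / len(data)
```
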
### Citation

If our work inspires your research, please cite us:

```bibtex
@article{chia2024puzzlevqa,
  title={PuzzleVQA: Diagnosing Multimodal Reasoning Challenges of Language Models with Abstract Visual Patterns},
  author={Yew Ken Chia and Vernon Toh Yan Han and Deepanway Ghosal and Lidong Bing and Soujanya Poria},
  journal={arXiv preprint arXiv:2403.13315},
  year={2024}
}
```