nielsr (HF Staff) committed
Commit 1eecbd9 · verified · 1 Parent(s): 947cb9c

Improve dataset card: Add abstract, detailed description, usage, update task categories and paper link


This PR significantly enhances the dataset card for AVI-Math by incorporating rich content from the project's GitHub README.

Key updates include:
- A detailed introduction, abstract, and contributions section.
- Visual examples for the Benchmark, Analysis, and Exploration sections, embedded via absolute image URLs.
- A "Usage" section directing users to the GitHub repository for evaluation code.
- Addition of `image-text-to-text` to the `task_categories` metadata, alongside `question-answering`, to better reflect the multimodal nature of the dataset.
- Addition of relevant tags: `uav`, `aerial-imagery`, `multimodal`, `vlm`.
- An updated paper link pointing to the Hugging Face paper page (`https://huggingface.co/papers/2509.10059`).
- Addition of the project page link (`https://zytx121.github.io/`) to `Dataset Sources`.

Files changed (1)
  1. README.md +86 -8
README.md CHANGED
@@ -1,23 +1,101 @@
Old version (lines 1-23; removed lines are prefixed with "-"):

 ---
 license: apache-2.0
 task_categories:
 - question-answering
-language:
-- en
 tags:
 - math
 - reasoning
-size_categories:
-- 1K<n<10K
 ---

-## Dataset Sources

-<!-- Provide the basic links for the model. -->

-- **Repository:** https://github.com/VisionXLab/avi-math
-- **Paper:** https://arxiv.org/abs/2509.10059

 **BibTeX:**

New version (lines 1-101; added lines are prefixed with "+"):

 ---
+language:
+- en
 license: apache-2.0
+size_categories:
+- 1K<n<10K
 task_categories:
 - question-answering
+- image-text-to-text
 tags:
 - math
 - reasoning
+- uav
+- aerial-imagery
+- multimodal
+- vlm
 ---

+# Multimodal Mathematical Reasoning Embedded in Aerial Vehicle Imagery: Benchmarking, Analysis, and Exploration

+<p align="center">
+<img src="https://github.com/VisionXLab/avi-math/blob/main/images/avi-math.png?raw=true" width=100%>
+</p>
+
+## Abstract
+
+Mathematical reasoning is critical for tasks such as precise distance and area computations, trajectory estimations, and spatial analysis in unmanned aerial vehicle (UAV) based remote sensing, yet current vision-language models (VLMs) have not been adequately tested in this domain. To address this gap, we introduce AVI-Math, the first benchmark to rigorously evaluate multimodal mathematical reasoning in aerial vehicle imagery, moving beyond simple counting tasks to include domain-specific knowledge in areas such as geometry, logic, and algebra. The dataset comprises 3,773 high-quality vehicle-related questions captured from UAV views, covering 6 mathematical subjects and 20 topics. The data, collected at varying altitudes and from multiple UAV angles, reflects real-world UAV scenarios, ensuring the diversity and complexity of the constructed mathematical problems. In this paper, we benchmark 14 prominent VLMs through a comprehensive evaluation and demonstrate that, despite their success on previous multimodal benchmarks, these models struggle with the reasoning tasks in AVI-Math. Our detailed analysis highlights significant limitations in the mathematical reasoning capabilities of current VLMs and suggests avenues for future research. Furthermore, we explore the use of Chain-of-Thought prompting and fine-tuning techniques, which show promise in addressing the reasoning challenges in AVI-Math. Our findings not only expose the limitations of VLMs in mathematical reasoning but also offer valuable insights for advancing UAV-based trustworthy VLMs in real-world applications.
+
+<p align="center">
+<img src="https://github.com/VisionXLab/avi-math/blob/main/images/cat.png?raw=true" width=50%>
+<div style="display: inline-block; color: #999; padding: 2px;">
+ARI: arithmetic, CNT: counting, ALG: algebra, STA: statistics, LOG: logic, GEO: geometry.
+</div>
+</p>
+
+---
+
+## Latest Updates
+
+- **[2025.09.15]** We released the benchmark and evaluation code.
+- **[2025.09.08]** Accepted by ISPRS JPRS.
+
+---
+
+## Contributions
+
+- **Benchmark:** We introduce AVI-Math, the first multimodal benchmark for mathematical reasoning in UAV imagery, covering six subjects and real-world UAV scenarios.
+
+- **Analysis:** We provide a comprehensive analysis, uncovering the limitations of current VLMs in mathematical reasoning and offering insights for future improvements.
+
+- **Exploration:** We explore the potential of Chain-of-Thought prompting and fine-tuning techniques to enhance VLM performance, providing a 215k-sample instruction set for VLMs to learn domain-specific knowledge in UAV scenarios.

+---
+
+## Benchmark
+
+Examples of the six mathematical reasoning subjects in AVI-Math.
+
+<p align="center">
+<img src="https://github.com/VisionXLab/avi-math/blob/main/images/bench1.png?raw=true" width=100%>
+</p>
+<p align="center">
+<img src="https://github.com/VisionXLab/avi-math/blob/main/images/bench2.png?raw=true" width=100%>
+</p>
+
+Please download the [dataset](https://huggingface.co/datasets/erenzhou/AVI-Math) first, then use the evaluation code to run inference and compute the scores.
+
+---
+
+## Analysis
+
+Accuracy scores on AVI-Math. AVG: average accuracy over the six subjects. FRE: free-form question, CHO: multiple-choice question, T/F: true-or-false question. The highest scores among models in each part and overall are highlighted in blue and red. The table reports results with the original model weights, without fine-tuning.
+
+<p align="center">
+<img src="https://github.com/VisionXLab/avi-math/blob/main/images/analysis.png?raw=true" width=100%>
+</p>
+
+---
+
+## Exploration
+
+Chain-of-Thought and fine-tuning results on various VLMs.
+
+<p align="center">
+<img src="https://github.com/VisionXLab/avi-math/blob/main/images/explore.png?raw=true" width=100%>
+</p>
+
+---
+
+## Usage
+
+The dataset can be downloaded from Hugging Face. To run inference and evaluate scores on the dataset, please refer to the code provided in the [official GitHub repository](https://github.com/VisionXLab/avi-math).

+## Dataset Sources

+- **Paper:** [Multimodal Mathematical Reasoning Embedded in Aerial Vehicle Imagery: Benchmarking, Analysis, and Exploration](https://huggingface.co/papers/2509.10059)
+- **Repository:** https://github.com/VisionXLab/avi-math
+- **Project Page:** https://zytx121.github.io/

 **BibTeX:**
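
For readers who want to try the updated card's "Usage" section right away, below is a minimal loading sketch. The repository ID `erenzhou/AVI-Math` is taken from the card itself; the split and column names are not documented here, so the snippet discovers them at runtime instead of assuming a schema. This is a sketch, not the project's official evaluation pipeline.

```python
# Minimal sketch: download AVI-Math from the Hugging Face Hub and inspect it.
# Assumption: the dataset loads with its default configuration; split and
# column names are discovered at runtime rather than hard-coded.
from datasets import load_dataset

ds = load_dataset("erenzhou/AVI-Math")  # DatasetDict with all available splits

for split_name, split in ds.items():
    print(f"{split_name}: {split.num_rows} rows, columns = {split.column_names}")

# Peek at one example from the first split to see the question/answer fields.
first_split = next(iter(ds.values()))
print(first_split[0])
```

For actual inference and scoring, follow the evaluation code in the GitHub repository linked in `Dataset Sources`.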