viktorzhou committed · Commit 437faa4 · verified · Parent: 66fe2aa

Update README.md (1 file changed: README.md, +27 −12)

---
pretty_name: PaveBench
size_categories:
- 10K<n<100K
license: cc-by-nc-sa-4.0
tags:
- computer-vision
- vision-language
- image-classification
- multimodal-learning
- benchmark
task_categories:
- question-answering
language:
- en
---

# PaveBench: A Versatile Benchmark for Pavement Distress Perception and Interactive Vision-Language Analysis

![PaveBench Overview](figures/PaveBench.png)

📍 Data Availability: The dataset will be publicly released under the CC BY-NC-SA 4.0 license upon official acceptance of the associated paper.

Stay tuned for updates: the Hugging Face repository will be updated with download links and detailed documentation once the paper is accepted.

## Abstract

PaveBench is a large-scale benchmark for pavement distress perception and interactive vision-language analysis on real-world highway inspection images. It supports four core tasks: classification, object detection, semantic segmentation, and vision-language question answering. On the visual side, PaveBench provides large-scale annotations on real top-down pavement images and includes a curated hard-distractor subset for robustness evaluation. On the multimodal side, it introduces PaveVQA, a real-image question-answering dataset supporting single-turn, multi-turn, and expert-corrected interactions, covering recognition, localization, quantitative estimation, and maintenance reasoning.

## About the Dataset

PaveBench is built on real-world highway inspection images collected in Liaoning Province, China, using a highway inspection vehicle equipped with a high-resolution line-scan camera. The captured images are top-down orthographic pavement views, which preserve the geometric properties of distress patterns and support reliable downstream quantification. The dataset provides unified annotations for multiple pavement distress tasks and is designed to connect visual perception with interactive vision-language analysis.

The visual subset contains **20,124** high-resolution pavement images of size **512 × 512**. It supports:
- image classification
- object detection
- semantic segmentation

In addition, the multimodal subset, **PaveVQA**, contains **32,160** question-answer pairs, comprising:
- **10,050** single-turn QA pairs
- **20,100** multi-turn interactions
- **2,010** error-correction pairs

These question-answer pairs cover recognition, localization, quantitative estimation, severity assessment, and maintenance recommendation.

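To make the three interaction types above concrete, here is a minimal sketch of what one multi-turn PaveVQA record might look like. The field names, path, and answer text are purely illustrative assumptions; the released schema may differ.

```python
# Hypothetical PaveVQA record layout (illustrative only; the released
# schema may differ). A multi-turn sample pairs one image with a
# conversation, and error-correction pairs would add an expert fix.
record = {
    "image": "images/000123.png",  # assumed relative path
    "task": "multi_turn",          # single_turn | multi_turn | error_correction
    "turns": [
        {"question": "What distress is visible?", "answer": "Pothole"},
        {"question": "Estimate its area.", "answer": "About 0.35 m^2"},
    ],
    "correction": None,            # filled for error-correction pairs
}

def n_turns(rec: dict) -> int:
    """Count question-answer turns in a record."""
    return len(rec["turns"])

print(n_turns(record))  # 2
```

A single-turn record would simply carry one entry in `turns`, and an error-correction pair would populate `correction` with the expert's revised answer.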
## Distress Categories

PaveBench covers six visual categories, including:
- Pothole
- Negative Sample

These annotations are organized through a hierarchical pipeline covering classification, detection, and segmentation.

## Hard Distractors

A key feature of PaveBench is its curated **hard-distractor subset**. During annotation, visually confusing non-distress patterns were collected, including:
- shadows
- road markings

These distractors often co-occur with real pavement distress and closely resemble true distress patterns, making the benchmark more realistic and more challenging for robustness evaluation.

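One way such a subset can be used is to report accuracy separately on hard distractors and on ordinary samples. The sketch below assumes per-sample fields (`label`, `is_distractor`, `pred`) that are hypothetical, not part of the published dataset schema.

```python
# Sketch: robustness check comparing accuracy on hard distractors
# vs. ordinary samples. All field names here are hypothetical.
samples = [
    {"label": "negative", "is_distractor": True,  "pred": "pothole"},
    {"label": "negative", "is_distractor": True,  "pred": "negative"},
    {"label": "negative", "is_distractor": False, "pred": "negative"},
    {"label": "pothole",  "is_distractor": False, "pred": "pothole"},
]

def accuracy(rows):
    """Fraction of rows where the prediction matches the label."""
    return sum(r["pred"] == r["label"] for r in rows) / len(rows)

hard = [r for r in samples if r["is_distractor"]]
easy = [r for r in samples if not r["is_distractor"]]
print(accuracy(hard), accuracy(easy))  # 0.5 1.0
```

A large gap between the two numbers would indicate that a model relies on superficial cues that distractors such as shadows and road markings can mimic.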
## PaveVQA

The questions are designed around practical pavement inspection needs, including:
- recognition
- localization
- quantitative estimation
- severity assessment
- maintenance recommendation

Structured metadata derived from visual annotations, such as bounding boxes, pixel area, and skeleton length, is used to support grounded and low-hallucination question answering.

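As an illustration of the kind of structured metadata described above, the sketch below derives a bounding box and pixel area from a binary distress mask and converts the area to physical units under an assumed ground sampling distance. The function name, the GSD value, and the output keys are assumptions for illustration, not the dataset's actual pipeline.

```python
import numpy as np

def mask_metadata(mask: np.ndarray, gsd_m: float = 0.002):
    """Derive simple structured metadata from a binary distress mask.

    gsd_m is an assumed ground sampling distance (metres per pixel);
    the real value depends on the line-scan camera setup.
    """
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None  # negative sample: no distress pixels
    bbox = (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
    pixel_area = int(mask.sum())
    physical_area_m2 = pixel_area * gsd_m ** 2
    return {
        "bbox_xyxy": bbox,
        "pixel_area": pixel_area,
        "physical_area_m2": physical_area_m2,
    }

# Toy 8x8 mask with a 3x4 distress blob
mask = np.zeros((8, 8), dtype=np.uint8)
mask[2:5, 1:5] = 1
print(mask_metadata(mask))  # bbox (1, 2, 4, 4), pixel_area 12
```

Feeding such deterministic quantities to the question generator is what keeps quantitative answers grounded rather than hallucinated.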
## Dataset Statistics

According to the paper, PaveBench provides:
- Four primary analysis tasks
- Fourteen fine-grained VQA sub-categories

PaveBench is designed to provide a unified foundation for both precise visual perception and interactive multimodal reasoning in the pavement domain.

## Benchmark Tasks

PaveBench supports four core tasks:
1. Image Classification
2. Object Detection
3. Semantic Segmentation
4. Vision-Language Question Answering

It also includes an agent-augmented evaluation setting where vision-language models are combined with domain-specific tools for more reliable quantitative analysis.

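The agent-augmented idea can be sketched as a model delegating quantitative questions to deterministic tools instead of estimating values itself. The tool name, dispatch format, and GSD value below are assumptions for illustration, not the paper's actual API.

```python
# Minimal sketch of tool-augmented answering: a (hypothetical) VLM emits
# a structured tool call, and a deterministic tool computes the number.

def crack_length_tool(pixel_skeleton_length: int, gsd_m: float = 0.002) -> float:
    """Convert a skeleton length in pixels to metres (assumed GSD)."""
    return pixel_skeleton_length * gsd_m

TOOLS = {"crack_length": crack_length_tool}

def answer(query: dict) -> float:
    """Dispatch a structured tool call to the matching tool."""
    return TOOLS[query["tool"]](**query["args"])

call = {"tool": "crack_length", "args": {"pixel_skeleton_length": 500}}
print(answer(call))  # 1.0
```

The point of the design is that the language model only decides *which* measurement to request; the measurement itself comes from annotation-derived geometry, which is what makes the quantitative analysis reliable.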
## Usage

Example usage with `datasets`:

```python
from datasets import load_dataset

dataset = load_dataset("MML-Group/PaveBench")
```