utkarsh4430 committed
Commit 0f71988 · verified · 1 Parent(s): 040d8d4

Upload folder using huggingface_hub
Files changed (3):
  1. README.md +71 -44
  2. manifest.json +26 -0
  3. test.parquet +3 -0
README.md CHANGED
@@ -1,44 +1,71 @@
- ---
- dataset_info:
-   features:
-   - name: id
-     dtype: string
-   - name: turncase
-     dtype: string
-   - name: num_turns
-     dtype: int32
-   - name: prompt_category
-     dtype: string
-   - name: eval_focus
-     dtype: string
-   - name: prompt
-     dtype: string
-   - name: golden_answer
-     dtype: string
-   - name: image
-     dtype: image
-   - name: images
-     sequence:
-       dtype: image
-   - name: num_images
-     dtype: int32
-   - name: tool_trajectory
-     dtype: string
-   - name: rubrics
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 5981789093
-     num_examples: 1204
-   download_size: 5981789093
-   dataset_size: 5981789093
- configs:
- - config_name: default
-   data_files:
-   - split: train
-     path: data/*.parquet
- ---
-
- VisuAlToolBench is a challenging benchmark to assess tool-enabled visual perception, transformation, and reasoning in multimodal LLMs. It evaluates whether models can not only think about images but also think with images by actively manipulating visuals (e.g., crop, edit, enhance) and integrating general-purpose tools to solve complex tasks. The dataset contains single-turn and multi-turn tasks across diverse domains, each accompanied by detailed rubrics for systematic evaluation. Parquet files under `data/` are auto-indexed by the Hub and power the Dataset Viewer.
-
- Paper: [BEYOND SEEING: Evaluating Multimodal LLMs on Tool-enabled Image Perception, Transformation, and Reasoning](https://arxiv.org/pdf/2510.12712)
+ # VisToolBench Dataset
+
+ A benchmark dataset for evaluating vision-language models on tool-use tasks.
+
+ ## Dataset Statistics
+
+ - **Total samples**: 1204
+ - **Single-turn**: 603
+ - **Multi-turn**: 601
+
+ ## Schema
+
+ | Column | Type | Description |
+ |--------|------|-------------|
+ | `id` | string | Unique task identifier |
+ | `turncase` | string | Either "single-turn" or "multi-turn" |
+ | `num_turns` | int | Number of conversation turns (1 for single-turn) |
+ | `prompt_category` | string | Task category (e.g., "medical", "scientific", "general") |
+ | `eval_focus` | string | What aspect is being evaluated (e.g., "visual_reasoning", "tool_use") |
+ | `image` | Image | Preview image (so HF viewer always shows an image) |
+ | `turn_prompts` | List[string] | Per-turn prompts (single-turn → list of length 1) |
+ | `turn_images` | List[Image] | Per-turn images (single-turn → list of length 1) |
+ | `turn_golden_answers` | List[string] | Per-turn golden answers |
+ | `turn_tool_trajectories` | List[string] | Per-turn tool trajectories (JSON strings) |
+ | `rubrics_by_turn` | List[string] | Per-turn rubric dicts as JSON strings (includes weights + metadata) |
+ | `num_images` | int | Number of turn images (usually equals `num_turns`) |
+ | `images` | List[Image] | Alias of `turn_images` for HF viewer friendliness |
+ | `rubrics` | string | Convenience JSON string keyed by turn (keys like `turn_1`, `turn_2`, ...) |
+
+ ## Rubrics Format
+
+ Each rubric entry contains:
+ - `description`: What the rubric evaluates
+ - `weight`: Importance weight (1-5)
+ - `objective/subjective`: Whether evaluation is objective or subjective
+ - `explicit/implicit`: Whether the answer is explicit or implicit in the image
+ - `category`: List of categories (e.g., "instruction following", "truthfulness")
+ - `critical`: Whether this is a critical rubric ("yes"/"no")
+ - `final_answer`: Whether this relates to the final answer ("yes"/"no")
+
+ ## Usage
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the dataset
+ ds = load_dataset("path/to/dataset")
+
+ # Access a sample
+ sample = ds['test'][0]
+ print(sample['turn_prompts'])    # list[str]
+ print(sample['turn_images'][0])  # PIL Image (turn 1)
+
+ # Parse rubrics for turn 1
+ import json
+ turn1_rubrics = json.loads(sample['rubrics_by_turn'][0])
+ for rubric_id, rubric in turn1_rubrics.items():
+     print(f"{rubric['description']} (weight: {rubric['weight']})")
+ ```
+
+ ## Splits
+
+ - `test`: Full dataset (1204 samples)
+
+ ## License
+
+ [Specify license here]
+
+ ## Citation
+
+ [Add citation here]
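
The new README's rubric columns support a simple weighted scoring pass over each turn. Below is a minimal sketch, assuming the column names from the schema above (`rubrics_by_turn`, with a `weight` field per rubric) and a user-supplied `judge_rubric` callable (hypothetical, e.g. an LLM judge); the aggregation is illustrative, not the benchmark's official scoring.

```python
import json

from datasets import load_dataset

# Load the test split straight from the uploaded parquet
# (swap in the actual dataset repo id or a local path).
ds = load_dataset("parquet", data_files={"test": "test.parquet"})["test"]

def score_sample(sample, judge_rubric):
    """Return a weight-normalized rubric score for every turn of a sample.

    `judge_rubric(turn_idx, rubric) -> bool` is a user-supplied callable
    (e.g. an LLM judge); this helper only does the aggregation.
    """
    turn_scores = []
    for turn_idx, rubric_json in enumerate(sample["rubrics_by_turn"]):
        rubrics = json.loads(rubric_json)  # dict: rubric_id -> rubric fields
        total = sum(r["weight"] for r in rubrics.values())
        earned = sum(r["weight"] for r in rubrics.values()
                     if judge_rubric(turn_idx, r))
        turn_scores.append(earned / total if total else 0.0)
    return turn_scores

# Smoke test with a judge that passes everything: every turn should score 1.0.
print(score_sample(ds[0], judge_rubric=lambda i, r: True))
```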
manifest.json ADDED
@@ -0,0 +1,26 @@
+ {
+   "single_json": "single_turn_data_corrected_with_rubrics_weights.json",
+   "multi_json": "multi_turn_data_corrected_with_rubrics_weights.json",
+   "counts": {
+     "single": 603,
+     "multi": 601,
+     "total": 1204
+   },
+   "columns": [
+     "id",
+     "turncase",
+     "num_turns",
+     "prompt_category",
+     "eval_focus",
+     "image",
+     "turn_prompts",
+     "turn_images",
+     "turn_golden_answers",
+     "turn_tool_trajectories",
+     "rubrics_by_turn",
+     "images",
+     "rubrics",
+     "num_images"
+   ],
+   "out_parquet": "hf_upload_final_corrected_v2/test.parquet"
+ }
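
The manifest recorded above doubles as a lightweight integrity check for the parquet export. A minimal sketch, assuming `manifest.json` and `test.parquet` sit in the working directory; `pyarrow` reads only the parquet footer metadata, so the check stays cheap even for a ~10 GB file.

```python
import json

import pyarrow.parquet as pq

# Manifest written alongside the parquet export.
with open("manifest.json") as f:
    manifest = json.load(f)

# The parquet footer exposes row count and column names without
# loading the multi-gigabyte data itself.
pf = pq.ParquetFile("test.parquet")
actual_columns = set(pf.schema_arrow.names)
actual_rows = pf.metadata.num_rows

assert actual_rows == manifest["counts"]["total"], (actual_rows, manifest["counts"])
assert actual_columns == set(manifest["columns"]), actual_columns ^ set(manifest["columns"])
print(f"OK: {actual_rows} rows and {len(actual_columns)} columns match the manifest")
```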
test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:522cbe9b0ac1b99fcebd98821a0d234526078e2cc0e42c2378c5a862422f1255
+ size 10249225530
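
`test.parquet` itself is stored with Git LFS, so the repository only carries the pointer above; the actual file is fetched on clone or download. A minimal sketch of verifying a downloaded copy against the pointer's `oid` and `size` fields (the chunked read keeps memory flat while hashing ~10 GB):

```python
import hashlib
import os

# Values copied from the LFS pointer above.
EXPECTED_SHA256 = "522cbe9b0ac1b99fcebd98821a0d234526078e2cc0e42c2378c5a862422f1255"
EXPECTED_SIZE = 10249225530
PATH = "test.parquet"

# Size check first: cheap, and catches a truncated download immediately.
assert os.path.getsize(PATH) == EXPECTED_SIZE, "size mismatch (partial download?)"

# Stream the file through SHA-256 in 8 MiB chunks.
digest = hashlib.sha256()
with open(PATH, "rb") as f:
    for chunk in iter(lambda: f.read(8 * 1024 * 1024), b""):
        digest.update(chunk)

assert digest.hexdigest() == EXPECTED_SHA256, "hash mismatch"
print("test.parquet matches the LFS pointer")
```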