Commit: Upload folder using huggingface_hub

Files changed:
- README.md (+71, -44)
- manifest.json (+26, -0)
- test.parquet (+3, -0)

README.md (changed)
# VisToolBench Dataset

A benchmark dataset for evaluating vision-language models on tool-use tasks.
## Dataset Statistics

- **Total samples**: 1204
- **Single-turn**: 603
- **Multi-turn**: 601
## Schema

| Column | Type | Description |
|--------|------|-------------|
| `id` | string | Unique task identifier |
| `turncase` | string | Either "single-turn" or "multi-turn" |
| `num_turns` | int | Number of conversation turns (1 for single-turn) |
| `prompt_category` | string | Task category (e.g., "medical", "scientific", "general") |
| `eval_focus` | string | Aspect being evaluated (e.g., "visual_reasoning", "tool_use") |
| `image` | Image | Preview image (so the HF viewer always shows an image) |
| `turn_prompts` | List[string] | Per-turn prompts (single-turn → list of length 1) |
| `turn_images` | List[Image] | Per-turn images (single-turn → list of length 1) |
| `turn_golden_answers` | List[string] | Per-turn golden answers |
| `turn_tool_trajectories` | List[string] | Per-turn tool trajectories (JSON strings) |
| `rubrics_by_turn` | List[string] | Per-turn rubric dicts as JSON strings (includes weights and metadata) |
| `num_images` | int | Number of turn images (usually equals `num_turns`) |
| `images` | List[Image] | Alias of `turn_images` for HF viewer friendliness |
| `rubrics` | string | Convenience JSON string keyed by turn (keys like `turn_1`, `turn_2`, ...) |
## Rubrics Format

Each rubric entry contains:

- `description`: What the rubric evaluates
- `weight`: Importance weight (1-5)
- `objective/subjective`: Whether evaluation is objective or subjective
- `explicit/implicit`: Whether the answer is explicit or implicit in the image
- `category`: List of categories (e.g., "instruction following", "truthfulness")
- `critical`: Whether this is a critical rubric ("yes"/"no")
- `final_answer`: Whether this relates to the final answer ("yes"/"no")
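The `weight` field lets per-rubric verdicts be rolled up into a turn-level score. Below is a minimal sketch of weight-normalized aggregation; the two-rubric example payload and the `verdicts` mapping (rubric id → pass/fail) are invented for illustration, not part of the dataset:

```python
import json


def weighted_rubric_score(rubrics_json: str, verdicts: dict) -> float:
    """Aggregate pass/fail verdicts into a weight-normalized score in [0, 1].

    `rubrics_json` is one entry of `rubrics_by_turn` (a JSON string mapping
    rubric id -> rubric dict); `verdicts` maps rubric id -> bool.
    """
    rubrics = json.loads(rubrics_json)
    total = sum(r["weight"] for r in rubrics.values())
    earned = sum(r["weight"] for rid, r in rubrics.items() if verdicts.get(rid))
    return earned / total if total else 0.0


# Hypothetical two-rubric turn (weights in the 1-5 range described above)
demo = json.dumps({
    "r1": {"description": "Names the correct tool", "weight": 5},
    "r2": {"description": "Reports correct units", "weight": 2},
})
score = weighted_rubric_score(demo, {"r1": True, "r2": False})
print(score)  # 5/7 ≈ 0.714
```

Normalizing by the summed weights keeps scores comparable across turns with different rubric counts.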
## Usage

```python
import json

from datasets import load_dataset

# Load the dataset
ds = load_dataset("path/to/dataset")

# Access a sample
sample = ds['test'][0]
print(sample['turn_prompts'])    # list[str]
print(sample['turn_images'][0])  # PIL Image (turn 1)

# Parse the rubrics for turn 1
turn1_rubrics = json.loads(sample['rubrics_by_turn'][0])
for rubric_id, rubric in turn1_rubrics.items():
    print(f"{rubric['description']} (weight: {rubric['weight']})")
```
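The schema implies an invariant worth checking before evaluation: each per-turn list should have exactly `num_turns` entries. A small sketch of that sanity check over a plain dict (the sample row and its placeholder values are invented; a real row would come from the loaded dataset):

```python
def check_turn_alignment(row: dict) -> bool:
    """True when every per-turn list matches the row's num_turns."""
    n = row["num_turns"]
    per_turn = ("turn_prompts", "turn_images", "turn_golden_answers",
                "turn_tool_trajectories", "rubrics_by_turn")
    return all(len(row[col]) == n for col in per_turn)


# Invented two-turn row with placeholder values
row = {
    "num_turns": 2,
    "turn_prompts": ["p1", "p2"],
    "turn_images": ["img1", "img2"],
    "turn_golden_answers": ["a1", "a2"],
    "turn_tool_trajectories": ["[]", "[]"],
    "rubrics_by_turn": ["{}", "{}"],
}
print(check_turn_alignment(row))  # True
```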
## Splits

- `test`: Full dataset (1204 samples)
## License

[Specify license here]
## Citation

[Add citation here]
manifest.json (added)
{
  "single_json": "single_turn_data_corrected_with_rubrics_weights.json",
  "multi_json": "multi_turn_data_corrected_with_rubrics_weights.json",
  "counts": {
    "single": 603,
    "multi": 601,
    "total": 1204
  },
  "columns": [
    "id",
    "turncase",
    "num_turns",
    "prompt_category",
    "eval_focus",
    "image",
    "turn_prompts",
    "turn_images",
    "turn_golden_answers",
    "turn_tool_trajectories",
    "rubrics_by_turn",
    "images",
    "rubrics",
    "num_images"
  ],
  "out_parquet": "hf_upload_final_corrected_v2/test.parquet"
}
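The manifest is self-checkable: the single and multi counts must sum to the total, and its column list should match the schema in the README. A sketch of that validation; the manifest here is inlined rather than read from disk, and the `validate_manifest` helper is a hypothetical name, not part of the dataset tooling:

```python
EXPECTED_COLUMNS = {
    "id", "turncase", "num_turns", "prompt_category", "eval_focus",
    "image", "turn_prompts", "turn_images", "turn_golden_answers",
    "turn_tool_trajectories", "rubrics_by_turn", "images", "rubrics",
    "num_images",
}


def validate_manifest(manifest: dict) -> None:
    """Raise AssertionError if the manifest is internally inconsistent."""
    counts = manifest["counts"]
    assert counts["single"] + counts["multi"] == counts["total"], "counts do not add up"
    assert set(manifest["columns"]) == EXPECTED_COLUMNS, "unexpected column set"


# Inline copy of the counts/columns from the manifest above
manifest = {
    "counts": {"single": 603, "multi": 601, "total": 1204},
    "columns": sorted(EXPECTED_COLUMNS),
}
validate_manifest(manifest)  # passes silently; raises on mismatch
```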
test.parquet (added, Git LFS pointer)
version https://git-lfs.github.com/spec/v1
oid sha256:522cbe9b0ac1b99fcebd98821a0d234526078e2cc0e42c2378c5a862422f1255
size 10249225530
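Because test.parquet is stored via Git LFS, a clone without LFS yields only the three-line pointer stub above, not the ~10 GB parquet file. A sketch of parsing such a pointer into its fields (the pointer text is inlined from the stub above; the space-separated key/value layout follows the git-lfs pointer spec):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file's space-separated key/value lines."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields


pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:522cbe9b0ac1b99fcebd98821a0d234526078e2cc0e42c2378c5a862422f1255
size 10249225530
"""
info = parse_lfs_pointer(pointer)
print(int(info["size"]) / 1e9)  # ≈ 10.25 (GB)
```

Checking the `size` field before downloading is a cheap way to confirm you actually want to fetch the full object.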