
# Judge Dataset Documentation

## Overview

The Judge dataset evaluates how well vision-language models (VLMs) can act as judges of computer vision model outputs. Each prompt presents a VLM with one or more encoded predictions and asks it to assess their quality via pairwise comparison, ranking, or absolute scoring. The dataset covers 15 vision tasks, 53 encoding variants, and three question types.

- **Dataset location:** `Icey444/Judge_questions_v2` on Hugging Face
- **v2.1 split:** 5,549 items (2,855 pairwise · 2,647 scoring · 47 ranking)
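
For a quick look at the split, the questions can be loaded with the `datasets` library. A minimal sketch; the split name and the `question_type` field are assumptions based on the statistics above, so check the dataset card for the actual schema:

```python
from collections import Counter

from datasets import load_dataset

# Load the judge questions from the Hugging Face Hub.
# The split name below is an assumption; consult the dataset card for the
# actual configuration and split names.
ds = load_dataset("Icey444/Judge_questions_v2", split="train")

# Tally items per question type (the field name "question_type" is assumed).
print(Counter(row["question_type"] for row in ds))
```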


## 1. Task Coverage

| Task | Category | What is judged |
|---|---|---|
| `object_detection` | Perception | Bounding boxes (label + coordinates) for specified classes |
| `instance_segmentation` | Perception | Per-instance pixel masks for specified classes |
| `semantic_segmentation` | Perception | Per-class pixel masks across all categories |
| `referring_segmentation` | Perception | Mask of the region referred to by a natural-language expression |
| `keypoint` | Perception | 17-keypoint COCO pose skeleton per person |
| `depth_estimation` | Perception | Dense monocular depth map |
| `lowlevel-deblur` | Restoration | Image deblurring result |
| `lowlevel-derain` | Restoration | Image deraining result |
| `lowlevel-desnow` | Restoration | Image desnowing result |
| `lowlevel-super-resolution` | Restoration | Image super-resolution result |
| `generation_controllable` | Generation | Controllable image generation (control condition + prompt) |
| `generation_editing` | Generation | Instruction-based image editing |
| `generation_inpainting_high_level` | Generation | High-level inpainting (semantic fill) |
| `generation_inpainting_low_level` | Generation | Low-level inpainting (seamless texture fill) |
| `generation_t2i` | Generation | Text-to-image generation |

## 2. Encoding Variants

Encoding variants determine how a model's prediction is presented to the judge. Each variant transforms raw annotation output (bounding boxes, masks, keypoints, etc.) into a form the VLM can read. There are two broad families: pixel (rendered image) and text (structured string). Combo encodings pair one text and one pixel form per option.

### 2.1 Object Detection (6 variants)

| Stem | Type | Description |
|---|---|---|
| `pixel_s0_m0` | pixel | Colored bounding boxes, no label text, overlaid on original image |
| `pixel_s1_m0` | pixel | Colored boxes + label text, overlaid on original image |
| `pixel_s1_m1` | pixel | Colored boxes + label text, rendered on a separate black canvas |
| `0305` | combo | `text_xyxy` coordinates + `pixel_s1_m0` box image, per option |
| `text_xyxy` | text | `{"label":…,"bbox":[x1,y1,x2,y2]}` per detection |
| `text_xywh` | text | `{"label":…,"bbox":[x,y,w,h]}` per detection |
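
The two text encodings carry identical geometry in different conventions. As a minimal illustration (plain Python, coordinates assumed to be pixel floats), converting between them is a matter of adding the box size to the corner:

```python
def xywh_to_xyxy(bbox):
    """Convert [x, y, w, h] (top-left corner + size) to [x1, y1, x2, y2]."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

# A detection as it would appear under text_xywh ...
det = {"label": "dog", "bbox": [40.0, 60.0, 100.0, 80.0]}
# ... and the same detection re-expressed for text_xyxy.
det_xyxy = {"label": det["label"], "bbox": xywh_to_xyxy(det["bbox"])}
print(det_xyxy)  # {'label': 'dog', 'bbox': [40.0, 60.0, 140.0, 140.0]}
```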

### 2.2 Instance Segmentation (12 variants)

Sub-sampled pixel (downsampled grid, each cell = most-covered instance index):

| Stem | Embed | Description |
|---|---|---|
| `pixel_ss0_m0` | overlay | Grid overlaid on original image |
| `pixel_ss0_m1` | separate | Grid on black canvas |
| `pixel_ss1_m0_o0_l0_c0_b0` | overlay | … (sub-sampled text) |

Original-resolution pixel (full-resolution mask rendering):

| Stem | Opacity | Label overlay | Color scheme | Bbox style |
|---|---|---|---|---|
| `pixel_ss1_m0_o0_l1_c0_b1` | 0.5 | text labels | color-by-class | dashed bbox |
| `pixel_ss1_m0_o0_l1_c1_b1` | 0.5 | text labels | color-by-instance | dashed bbox |
| `pixel_ss1_m0_o1_l1_c0_b1` | 1.0 | text labels | color-by-class | dashed bbox |
| `pixel_ss1_m1_o0_l1_c0_b1` | 0.5 | text labels | color-by-class | dashed bbox, separate canvas |
| `pixel_ss1_m0_o0_l1_c0_b0` | 0.5 | text labels | color-by-class | no bbox |
| `pixel_ss1_m0_o0_l0_c0_b1` | 0.5 | no labels | color-by-class | dashed bbox |

Text-only:

| Stem | Format |
|---|---|
| `text_polygon` | `{"instance_id":…,"label":…,"polygon":[[x,y],…]}` per instance |
| `text_rle` | `{"instance_id":…,"label":…,"rle":{…}}` (COCO RLE) per instance |
| `text_matrix` | 2D integer grid (rows × cols), each cell = instance index |

Combo: `1742` (polygon text + color-by-instance image, per option).
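
A consumer of the `text_rle` encoding can recover binary masks with `pycocotools`. A round-trip sketch, assuming the `rle` payload is the standard COCO compressed-RLE dict with `size` and `counts` keys:

```python
import numpy as np
from pycocotools import mask as mask_utils

# Encode a toy mask the way an encoder might ...
toy = np.zeros((4, 4), dtype=np.uint8, order="F")  # pycocotools wants Fortran order
toy[1:3, 1:3] = 1
rle = mask_utils.encode(toy)
entry = {"instance_id": 0, "label": "cat",         # shape of a text_rle record
         "rle": {"size": rle["size"], "counts": rle["counts"].decode("ascii")}}

# ... then decode it as a judge-side consumer would.
def decode_instance(entry):
    """Decode one text_rle instance entry into an HxW binary mask."""
    r = dict(entry["rle"])
    if isinstance(r["counts"], str):               # JSON carries counts as a string
        r["counts"] = r["counts"].encode("ascii")
    return mask_utils.decode(r)

assert (decode_instance(entry) == toy).all()
```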

### 2.3 Semantic Segmentation (8 variants)

Sub-sampled pixel (3): overlay, separate canvas, text sub-sample.

Original-resolution pixel (5):

| Stem | Opacity | Label overlay | Color scheme | Canvas |
|---|---|---|---|---|
| `pixel_ss1_m0_o0_l1_c0` | 0.5 | text labels | standard palette | overlay |
| `pixel_ss1_m0_o0_l1_c1` | 0.5 | text labels | random colors | overlay |
| `pixel_ss1_m0_o1_l1_c0` | 1.0 | text labels | standard palette | overlay |
| `pixel_ss1_m1_o0_l1_c0` | 0.5 | text labels | standard palette | separate canvas |
| `pixel_ss1_m0_o0_l0_c0` | 0.5 | no labels | standard palette | overlay |

Text-only:

| Stem | Format |
|---|---|
| `text_polygon` | `{"label":…,"polygon":[[x,y],…]}` per segment |
| `text_matrix` | 2D integer grid, each cell = class index |

Combo: `4649` (sub-sample text + original-res overlay image, per option).
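
The `text_matrix` grid can be derived from a full-resolution class map by majority vote within each cell. A minimal sketch; the actual grid size and tie-breaking used by the encoder scripts are assumptions here:

```python
import numpy as np

def label_map_to_matrix(label_map, rows=16, cols=16):
    """Downsample an HxW integer class map to a rows x cols grid.

    Each cell takes the most frequent class index within its patch
    (ties resolved toward the lower index by bincount/argmax).
    """
    h, w = label_map.shape
    r_edges = np.linspace(0, h, rows + 1, dtype=int)
    c_edges = np.linspace(0, w, cols + 1, dtype=int)
    grid = np.zeros((rows, cols), dtype=int)
    for i in range(rows):
        for j in range(cols):
            patch = label_map[r_edges[i]:r_edges[i + 1],
                              c_edges[j]:c_edges[j + 1]]
            grid[i, j] = np.bincount(patch.ravel()).argmax()
    return grid
```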

### 2.4 Referring Segmentation (11 variants)

Sub-sampled pixel (3): overlay, separate, text sub-sample.

Original-resolution pixel (5):

| Stem | Mask style | Opacity | Canvas |
|---|---|---|---|
| `pixel_ss1_m0_o0_m0` | filled region | 0.5 | overlay |
| `pixel_ss1_m0_o0_m1` | contour only | 0.5 | overlay |
| `pixel_ss1_m0_o0_m2` | fill + contour | 0.5 | overlay |
| `pixel_ss1_m0_o1_m0` | filled region | 1.0 | overlay |
| `pixel_ss1_m1_o0_m0` | filled region | 0.5 | separate canvas |

Text-only:

| Stem | Format |
|---|---|
| `text_polygon` | `{"label":"<expression>","polygon":[[x,y],…]}` |
| `text_matrix` | 2D grid; legend maps index → referring expression |

Combo: `7080` (polygon text + fill+contour image, per option).

### 2.5 Keypoint Detection (8 variants)

Pixel:

| Stem | Style | Color scheme | Canvas |
|---|---|---|---|
| `pixel_s0_c1_m0` | points only | color-by-instance | overlay |
| `pixel_s1_c0_m0` | skeleton | single color (green) | overlay |
| `pixel_s1_c1_m0` | skeleton | color-by-instance | overlay |
| `pixel_s1_c2_m0` | skeleton | color-by-body-part | overlay |
| `pixel_s1_c1_m1` | skeleton | color-by-instance | separate canvas |

Text-only:

| Stem | Format |
|---|---|
| `text_flat_list` | 34 numbers `[x0..x16, y0..y16]` per person (COCO order) |
| `text_part_keyed_json` | `{"person_id":…,"keypoints":[{"name":…,"x":…,"y":…},…]}` |
| `text_coco_style` | 51 numbers, `[x,y,v]`×17 per person |

All text formats include the note that `x=0.0, y=0.0` means the keypoint was not detected or is not visible.
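
The three text formats are mutually convertible. A sketch mapping the 34-number `text_flat_list` layout onto `text_part_keyed_json`, using the standard COCO-17 keypoint order (assumed to match the encoders; verify against `src/encoders/`):

```python
# Standard COCO-17 keypoint names, in annotation order.
COCO_KEYPOINTS = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]

def flat_list_to_part_keyed(person_id, flat):
    """Convert [x0..x16, y0..y16] into the part-keyed JSON layout.

    (0.0, 0.0) marks a keypoint that was not detected or is not visible.
    """
    assert len(flat) == 34
    xs, ys = flat[:17], flat[17:]
    return {
        "person_id": person_id,
        "keypoints": [
            {"name": name, "x": x, "y": y}
            for name, x, y in zip(COCO_KEYPOINTS, xs, ys)
        ],
    }
```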

### 2.6 Depth Estimation (3 variants)

Each variant is a colormap applied to the predicted depth map:

| Stem | Colormap | Semantics |
|---|---|---|
| `plasma` | Magma/plasma | Bright yellow = closest, dark purple = farthest |
| `turbo` | Turbo (rainbow) | Red = closest, blue = farthest |
| `gray` | Grayscale | Bright = closest, dark = farthest |
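
All three variants can be reproduced with Matplotlib colormaps. A sketch (Matplotlib >= 3.5 for the `colormaps` registry), assuming metric depth where larger means farther; the normalized depth is inverted so the bright/red end lands on the closest pixels, matching the table above:

```python
import numpy as np
from matplotlib import colormaps

def colorize_depth(depth, variant="plasma"):
    """Render a depth map (larger = farther) as an HxWx3 uint8 image.

    `variant` is one of the stems above ("plasma", "turbo", "gray"),
    which happen to also be valid Matplotlib colormap names.
    """
    d = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
    rgba = colormaps[variant](1.0 - d)   # invert: bright/red end = closest
    return (rgba[..., :3] * 255).astype(np.uint8)
```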

### 2.7 Low-level Restoration (4 tasks × 1 variant each)

Each task has a single pixel encoding: the restored output image shown alongside the degraded input.

| Task | Input context |
|---|---|
| `lowlevel-deblur` | Blurry source image |
| `lowlevel-derain` | Rainy source image |
| `lowlevel-desnow` | Snowy source image |
| `lowlevel-super-resolution` | Low-resolution source image |

### 2.8 Image Generation (5 tasks × 1 variant each)

Each task shows the generated output image(s) alongside the source context.

| Task | Source context shown |
|---|---|
| `generation_controllable` | Source image (control signal + reference) |
| `generation_editing` | Original image before editing |
| `generation_inpainting_high_level` | Original image with masked region |
| `generation_inpainting_low_level` | Original image with masked region |
| `generation_t2i` | No source image (text prompt only) |

## 3. Question Types

### 3.1 Pairwise Comparison

The judge sees two options (A and B) and selects the better prediction.

Structure:

```
[<image>]                        ← original/reference image (if available)

You are a judge to decide the quality of answers to a <task> task [based on my given image].
[Task-specific context: class(es) of interest / referring prompt / etc.]

Format of predictions: <encoding description>

Options:

A. [<image>] [text or legend]

B. [<image>] [text or legend]

<Final question>. Please answer with A or B.
```

**Pair sampling:** Within each `(image_id, class-of-interest, error_type, prompt)` group, pairs are drawn so that no two annotations with the same `final_score` are paired. Up to 10 pairs are drawn per group per task in the `encoding_analysis` run, or 50 in `judge_analysis`.

**Answer:** The letter corresponding to the annotation with the higher `final_score`.
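
A sketch of this sampling rule, with each annotation assumed to be a dict carrying a `final_score` field; the grouping and caps follow the text above, while the shuffling details are illustrative:

```python
import random
from itertools import combinations

def sample_pairs(group, max_pairs, rng=None):
    """Draw up to max_pairs (A, B) pairs with strictly unequal final_score.

    `group` holds the annotations sharing one
    (image_id, class-of-interest, error_type, prompt) key.
    """
    rng = rng or random.Random(42)
    candidates = [(a, b) for a, b in combinations(group, 2)
                  if a["final_score"] != b["final_score"]]
    rng.shuffle(candidates)
    pairs = []
    for a, b in candidates[:max_pairs]:
        if rng.random() < 0.5:          # randomize which side is shown as A
            a, b = b, a
        answer = "A" if a["final_score"] > b["final_score"] else "B"
        pairs.append({"A": a, "B": b, "answer": answer})
    return pairs
```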

The final question is sampled from five paraphrases to reduce sensitivity to any single phrasing:

- Which prediction is better?
- Which option is a better execution of the vision task?
- Which option would you prefer as answer to the vision task?
- Which of the two is the better result?
- Which option better fulfills the task?

### 3.2 Ranking

The judge sees N options (A through E, or fewer) and ranks them best-to-worst.

Structure:

```
[original image context]

You are a judge to decide the quality of answers to a <task> task.
[Task-specific context]

Format of predictions: <encoding description>

Options:

A. [<image> or text]
B. [<image> or text]
...

Rank the predictions from best to worst. Respond with the ranking as a single string
of letters only (best first, worst last). For example, BCAED.
```

Groups of 3–5 annotations sharing the same `(image_id, class-of-interest, error_type)` key are ranked together. Used in the `judge_analysis` run only.

**Answer:** Letters ordered by descending `final_score`.
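
The answer string follows mechanically from the scores. A minimal sketch, again assuming a `final_score` field per annotation:

```python
import string

def ranking_answer(options):
    """Return e.g. 'BCAED': option letters ordered best-to-worst.

    `options` lists the annotations in display order (index 0 = option A).
    """
    letters = string.ascii_uppercase[:len(options)]
    order = sorted(range(len(options)),
                   key=lambda i: options[i]["final_score"], reverse=True)
    return "".join(letters[i] for i in order)
```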

### 3.3 Scoring

The judge sees a single prediction and assigns a score from 0 to 10.

Structure:

```
[original image]

You are a judge to decide the quality of answers to a <task> task [based on my given image].
[Task-specific context]

Format of prediction: <encoding description>

[First image: original. Second image: encoded prediction.]   ← pixel encodings
[Prediction (text): <content>]                               ← text encodings

Score the quality of the prediction from 0 to 10.
0 = random guessing / worst, 10 = best possible.
Please answer with a single score from 0 to 10 only.
```

**Answer:** The annotation's `final_score`, normalized to 0–10.

Used in the `judge_analysis` run only: 20 groups × 5 annotations per group, per stem, per task.
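
If `final_score` is stored on an internal scale, the 0–10 target is a plain rescale. A sketch assuming scores in [0, 1] (the actual internal range is not documented in this file):

```python
def to_ten_point(final_score, lo=0.0, hi=1.0):
    """Clip final_score to [lo, hi] and map it to the integer 0-10 scale."""
    clipped = min(max(final_score, lo), hi)
    return round(10 * (clipped - lo) / (hi - lo))
```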


## 4. Prompt Construction Standards

### 4.1 Role Framing

Every prompt begins with a judge role sentence tailored to the task:

| Task group | Intro pattern |
|---|---|
| Object detection | "You are a judge to decide the quality of answers to an object detection task. The class(es) of interest is {coi}." |
| Instance / semantic segmentation | Same pattern with the respective task name and class(es) of interest |
| Referring segmentation | "… The prompt is '{expression}'." (or "The prompt is the referring expression shown in the options below." when the expression is not available at the prompt level) |
| Keypoint | "… The task is pose estimation." |
| Depth estimation | "… The task is depth prediction." |
| Low-level restoration | Task-specific sentence describing the restoration goal |
| Generation | Task-specific sentence describing the generation goal, plus the text prompt when available |

"based on my given image" is appended when the original image is included as a <image> placeholder.

### 4.2 Format Description

After the role sentence, the prompt includes a Format of predictions block describing the encoding so the judge knows what it is looking at:

- Pixel encodings: describe the visual style (overlay/canvas, color scheme, opacity, label style).
- Text encodings: describe the schema (e.g., JSON structure, coordinate conventions, grid dimensions and legend).
- Combo encodings: each option shows its own format description inline, followed by the encoded image.
- Generation/low-level: no format description (the prediction is a natural image); the instruction covers the task criterion instead.

### 4.3 Color Legends

For encodings where colors carry semantic meaning, a legend is included per option (not once globally), because different predictions may contain different classes or instances:

- Object detection pixel: legend lists each detected class and its assigned color.
- Instance segmentation (color-by-class): legend lists each class and color.
- Instance segmentation (color-by-instance): no per-instance legend; colors carry no class meaning and only distinguish instances from one another.
- Semantic segmentation: legend lists each class and color.
- Keypoint (color-by-part): legend lists each of the 17 COCO keypoint names and its color.
- Keypoint (color-by-instance): one sentence noting that all keypoints and links of the same person share a color; no per-person list.
- Keypoint (same color): "All keypoints and links use a single color (green). No color legend."
- Depth colormaps: the colormap semantics (which end is near/far) are described in the format block.

### 4.4 Image Placeholder Ordering

`<image>` placeholders in the prompt correspond to media entries in the same order:

1. Original/reference image (first, when present); always `original_{image_id}.png`.
2. Option A image (prediction rendered for annotation A).
3. Option B image (prediction rendered for annotation B).

Text-only encodings include only the original image (1 image total). Generation/low-level pixel encodings that have no source image in the image index include 2 images (A and B only). Generation tasks with a retrievable source image include 3 images.
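
Consumers can sanity-check this invariant by counting placeholders against the media list. A sketch; the field names `prompt` and `images` are assumptions about the item schema:

```python
def check_placeholders(item):
    """Verify the <image> count in the prompt matches the media list."""
    n_placeholders = item["prompt"].count("<image>")
    n_media = len(item["images"])
    assert n_placeholders == n_media, (
        f"{n_placeholders} placeholders vs {n_media} media entries")
```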

### 4.5 Subset Labels

Each item carries a `subset` field indicating which run produced it:

| Subset | Stems used | Samples | Pairs | Scoring/Ranking |
|---|---|---|---|---|
| `encoding_analysis` | all `is_base_ablation=1` stems (51 regular + gen/low-level) | 10 per task | 10 per task | No |
| `judge_analysis` | `is_final=1` stems (53) | 20 per task | 50 per task | Yes (20 groups × 5 annotations) |

Both runs use `seed=42`. The v2.1 JSON is the deduplicated union of the two, keyed on task + encoding + question_type + annotation IDs.
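
A sketch of that deduplication key; the exact field names and annotation-ID layout are assumptions:

```python
def dedup_union(encoding_items, judge_items):
    """Merge the two runs, keeping one item per logical question."""
    seen, merged = set(), []
    for item in encoding_items + judge_items:
        key = (item["task"], item["encoding"], item["question_type"],
               tuple(sorted(item["annotation_ids"])))
        if key not in seen:
            seen.add(key)
            merged.append(item)
    return merged
```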


## 5. Data Sources

| Task group | Annotation source | Image source |
|---|---|---|
| Perception tasks | `data_json_v2/*_annotations.json` | `images_v2/*.json` → local files or HF URLs |
| Low-level restoration | `data_json_v2/lowlevel_annotations.json` | HF: `Icey444/VisualJudge_images` (prediction URLs) + `images_v2/lowlevel.json` (source URLs) |
| Generation | `data_json_v2/generation_annotations.json` | HF: `Icey444/VisualJudge_images` (prediction URLs) + `images_v2/generation_images.json` (source URLs) |

Predictions are encoded locally by task-specific encoder scripts (`src/encoders/encode_*.py`) and stored under `output/encoded_v2/`. Original images are cached as `original_{image_id}.png` in the same directory.