Icey444 committed on
Commit 18d3063 · verified · 1 Parent(s): 65264e3

Upload docs/

This view is limited to 50 files because the commit contains too many changes.
Files changed (50)
  1. .gitattributes +1 -0
  2. docs/ENCODER_SCRIPTS.md +19 -0
  3. docs/ENCODING_VARIANTS.md +52 -0
  4. docs/ENCODING_VARIANT_COUNTS.md +34 -0
  5. docs/PIPELINE_OVERVIEW.md +116 -0
  6. docs/TASK_REPRESENTATIONS.md +95 -0
  7. docs/analysis_by_question_type.xlsx +0 -0
  8. docs/analysis_model_task.csv +31 -0
  9. docs/analysis_pairwise_encoding_by_task.csv +30 -0
  10. docs/analysis_pairwise_overall_by_task.csv +16 -0
  11. docs/analysis_ranking_by_task.csv +7 -0
  12. docs/analysis_scoring_by_task.csv +7 -0
  13. docs/encoding_variants_drop_bottom_accuracy.xlsx +0 -0
  14. docs/encoding_variants_drop_bottom_accuracy/drop_bottom_10pct.csv +15 -0
  15. docs/encoding_variants_drop_bottom_accuracy/drop_bottom_15pct.csv +15 -0
  16. docs/encoding_variants_drop_bottom_accuracy/drop_bottom_20pct.csv +15 -0
  17. docs/encoding_variants_drop_bottom_accuracy/drop_bottom_25pct.csv +15 -0
  18. docs/encoding_variants_drop_bottom_accuracy/drop_bottom_30pct.csv +15 -0
  19. docs/encoding_variants_drop_bottom_accuracy/drop_bottom_35pct.csv +15 -0
  20. docs/encoding_variants_drop_bottom_accuracy/drop_bottom_40pct.csv +15 -0
  21. docs/encoding_variants_drop_bottom_accuracy/drop_bottom_5pct.csv +15 -0
  22. docs/encoding_variants_summary.csv +106 -0
  23. docs/encoding_variants_summary_results.csv +106 -0
  24. docs/encoding_variants_summary_v2.1.csv +106 -0
  25. docs/encoding_variants_summary_vfinal1.csv +106 -0
  26. docs/encoding_variants_summary_vfinal2.csv +106 -0
  27. docs/encoding_variants_summary_vfinal3-sem-enc.csv +106 -0
  28. docs/my_drop.xlsx +0 -0
  29. docs/my_drop_tabs/drop_bottom_10pct.csv +15 -0
  30. docs/my_drop_tabs/drop_bottom_15pct.csv +15 -0
  31. docs/my_drop_tabs/drop_bottom_20pct.csv +15 -0
  32. docs/my_drop_tabs/drop_bottom_25pct.csv +15 -0
  33. docs/my_drop_tabs/drop_bottom_30pct.csv +15 -0
  34. docs/my_drop_tabs/drop_bottom_35pct.csv +15 -0
  35. docs/my_drop_tabs/drop_bottom_40pct.csv +15 -0
  36. docs/my_drop_tabs/drop_bottom_5pct.csv +15 -0
  37. docs/paper/doc.md +343 -0
  38. docs/paper/figs/fig1_model_accuracy.pdf +0 -0
  39. docs/paper/figs/fig1_model_accuracy.png +3 -0
  40. docs/paper/figs/fig2_task_heatmap.pdf +0 -0
  41. docs/paper/figs/fig2_task_heatmap.png +3 -0
  42. docs/paper/figs/fig3_modality.pdf +0 -0
  43. docs/paper/figs/fig3_modality.png +3 -0
  44. docs/paper/figs/fig4_encoding_stems.pdf +0 -0
  45. docs/paper/figs/fig4_encoding_stems.png +3 -0
  46. docs/paper/figs/fig5_error_type.pdf +0 -0
  47. docs/paper/figs/fig5_error_type.png +3 -0
  48. docs/paper/figs/fig6_instance_complexity.pdf +0 -0
  49. docs/paper/figs/fig6_instance_complexity.png +3 -0
  50. docs/paper/figs/fig_encoding_top3.pdf +3 -0
.gitattributes CHANGED
@@ -58,3 +58,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
58
  *.mp4 filter=lfs diff=lfs merge=lfs -text
59
  *.webm filter=lfs diff=lfs merge=lfs -text
60
  pairwise_prompts_v2.1.json filter=lfs diff=lfs merge=lfs -text
61
+ docs/paper/figs/fig_encoding_top3.pdf filter=lfs diff=lfs merge=lfs -text
docs/ENCODER_SCRIPTS.md ADDED
@@ -0,0 +1,19 @@
1
+ # Per-task encoder scripts
2
+
3
+ Each **complex** task has its own Python script that parses how its predictions are recorded (bbox, polygon, keypoint, etc.) and writes one image or text file per encoding variant.
4
+
5
+ | Task | Script | Prediction format | Outputs |
6
+ |------|--------|--------------------|---------|
7
+ | **depth_estimation** | `src/encoders/encode_depth.py` | `predictions[].url` (3 colormap URLs) | 3 images per sample: plasma, turbo, gray |
8
+ | **object_detection** | `src/encoders/encode_object_detection.py` | `predictions_type: "bbox"`, `predictions[].bbox` [x1,y1,x2,y2], `label`, `color_hex` | original, box_only, box_label, text_xyxy.txt, text_xywh.txt |
9
+ | **instance_segmentation** | `src/encoders/encode_instance_segmentation.py` | Polygon: `predictions[].polygon` [[x,y],...]. RLE: `predictions[].rle` (needs pycocotools to decode) | original, enc1..enc6 (overlay images), enc7_json.txt |
10
+ | **semantic_segmentation** | `src/encoders/encode_semantic_segmentation.py` | `predictions[].polygon` [[[x,y],...]] or [[x,y],...], `label` | same 7 as instance (enc1–7) |
11
+ | **referring_segmentation** | `src/encoders/encode_referring_segmentation.py` | `predictions[].polygon`, `label` (referring expression) | original, fill, contour, fill_contour, json.txt |
12
+ | **keypoint** | `src/encoders/encode_keypoint.py` | `predictions[].keypoint` (17×3: x,y,conf per COCO keypoint) | original, enc_a (same color), enc_b (per-person), enc_c (per-limb), enc_d_json.txt |
13
+ | **generation_*** / **lowlevel-*** | `src/encoders/encode_generation_lowlevel.py` | `predictions[].url` or `predictions[].image` (prediction image URL) | 1 image per sample |
14
+
15
+ **Shared:** `src/common.py` provides `load_image_id_to_url()` (from `images/*.json`) and `fetch_image_cv2(url)` for overlay tasks.
16
+
17
+ **Requirement for overlay tasks:** `images/*.json` (or root `images.json`) must list each image with `id` and `url` so that `image_id` in annotations can be resolved to the original image URL.
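+
+ As an illustration of how these pieces fit together, here is a minimal sketch of a box+label overlay in the spirit of `encode_object_detection.py`. It uses the shared helpers and the object-detection prediction format from the table above; the helpers' exact return types are assumptions, so treat this as a sketch rather than the actual encoder code.
+
+ ```python
+ import cv2
+ from src.common import load_image_id_to_url, fetch_image_cv2
+
+ def encode_box_label(annotation, out_path):
+     """Illustrative box+label overlay (not the real encoder)."""
+     id_to_url = load_image_id_to_url()                         # assumed: {image_id: url} from images/*.json
+     img = fetch_image_cv2(id_to_url[annotation["image_id"]])   # assumed: BGR numpy array
+     for pred in annotation["predictions"]:
+         x1, y1, x2, y2 = map(int, pred["bbox"])
+         h = pred.get("color_hex", "#00ff00").lstrip("#")
+         color = (int(h[4:6], 16), int(h[2:4], 16), int(h[0:2], 16))  # hex RGB -> OpenCV BGR
+         cv2.rectangle(img, (x1, y1), (x2, y2), color, 2)
+         cv2.putText(img, pred.get("label", ""), (x1, max(y1 - 5, 0)),
+                     cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 1)
+     cv2.imwrite(out_path, img)
+ ```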
18
+
19
+ **Run all:** `./run_encoding.sh` (uses 24 threads, 2 samples per task).
docs/ENCODING_VARIANTS.md ADDED
@@ -0,0 +1,52 @@
1
+ # Encoding variants per task (from ref/task_encode_req.txt)
2
+
3
+ Each task must be visualized in **N** ways, where **N** is the number of combinations of the listed variables.
4
+
5
+ **Required for overlay encodings:** `images/*.json` (or root `images.json`) must list each image with `id` and `url` (original image URL). The annotation’s `image_id` is used to look up this URL; overlays are drawn on the image fetched from that URL.
6
+
7
+ ## Object Detection
8
+ - **Pixel**: Style {box-only*, box+label}, Mask embedding {overlay*, separate}, Label format {none*, color-description}
9
+ - **Text**: Format {[x1,y1,x2,y2]*, [x,y,w,h]}
10
+ - **Modality**: {image-only*, text-only, image+text}
11
+ - Combinations: 2×2×2×2×3 = **48** (if all independent)
12
+
13
+ ## Instance Segmentation
14
+ - **Pixel**: Sub-sampling {sub-sampled, original}, Opacity α {0.3, 0.5*, 1.0}, Label overlay {none*, text-labels}, Color scheme {color-by-class*, color-by-instance}
15
+ - **Text**: Format {polygon*, RLE}
16
+ - **Modality**: {image-only*, text-only, image+text}
17
+ - Ref implements 7 encodings (enc1–enc7).
18
+
19
+ ## Semantic Segmentation
20
+ - **Pixel**: Sub-sampling, Opacity α {0.3, 0.5*, 1.0}, Label overlay {none*, text-labels}, Color scheme {standard-palette*, random}
21
+ - **Modality**: {image-only*, text-only, image+text}
22
+
23
+ ## Referring Segmentation
24
+ - **Pixel**: Sub-sampling, Opacity α {0.3, 0.5*, 1.0}, Mask style {fill*, contour, fill+contour}
25
+ - **Text**: Format {polygon*, RLE}
26
+ - **Modality**: {image-only*, text-only}
27
+
28
+ ## Keypoint
29
+ - **Pixel**: Style {points-only, skeleton*}, Color scheme {same-color, color-by-instance*, color-by-part}
30
+ - **Text**: Format {flat-list, part-keyed-JSON*, COCO-style}
31
+ - **Modality**: {image-only*, text-only, best-visual+text}
32
+ - Ref implements 4 encodings (a,b,c,d).
33
+
34
+ ## Depth
35
+ - **Pixel**: Colormap {plasma*, viridis, turbo} — **3 ways** (pre-rendered; just download).
36
+ - **Modality**: image-only.
37
+
38
+ ## Low-level / Generation
39
+ - No encoding variants (edited/generated images only).
40
+
41
+
42
+ These fields describe each encoding variant and correspond to columns in `docs/encoding_variants_summary*.csv` (a small filtering sketch follows the list):
+
+ - `task` – always set to the task name.
+ - `pixel_style` – e.g. box_only, box_label, points_only, skeleton (only for object_detection and keypoint).
+ - `mask_embed` – overlay or separate (segmentation tasks).
+ - `label_fmt` – none, color_filled_rect, text_only (object_detection only).
+ - `sub_sampling` – original or sub_sampled (instance/semantic/referring).
+ - `opacity` – 0.3, 0.5, 1.0 (segmentation).
+ - `label_overlay` – none or text_labels (instance/semantic).
+ - `color_scheme` – e.g. color_by_class, standard_palette, same_color (instance/semantic/keypoint).
+ - `mask_style` – fill, contour, fill_contour (referring only).
+ - `text_format` – e.g. xyxy, polygon, flat_list (per-task text formats).
+ - `colormap` – plasma, turbo, gray (depth only).
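+
+ These field values can be used to slice the summary tables in `docs/`. A minimal sketch, assuming pandas is installed; the column names are taken from `docs/encoding_variants_summary.csv`:
+
+ ```python
+ import pandas as pd
+
+ # Pick the final encoding variants for one task from the summary table.
+ df = pd.read_csv("docs/encoding_variants_summary.csv")
+ final_od = df[(df["task"] == "object_detection") & (df["is_final"] == 1)]
+ print(final_od[["encoding_id", "pixel_style", "mask_embed", "text_format"]])
+ ```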
docs/ENCODING_VARIANT_COUNTS.md ADDED
@@ -0,0 +1,34 @@
1
+ # Full encoding variant counts (ref/task_encode_req.txt)
2
+
3
+ **Current output: image-only + text-only** (image+text combinations not generated yet).
4
+
5
+ ## Object Detection
6
+ - **Pixel:** Style(2) × Mask(2) × Label format(3) = **12** image-only
7
+ - **Text:** Format(2) = **2** text-only
8
+ - **Output:** 12 + 2 = **14** per sample ✅
9
+
10
+ ## Instance Segmentation
11
+ - **Pixel:** Sub-sampling(2) × Opacity(3) × Label overlay(2) × Color scheme(2) = **24**
12
+ - **Text:** Format(2): polygon, RLE
13
+ - **Output:** 24 + 2 = **26** per sample ✅
14
+
15
+ ## Semantic Segmentation
16
+ - **Pixel:** Sub-sampling(2) × Opacity(3) × Label overlay(2) × Color scheme(2) = **24**
17
+ - **Text:** Format(2): polygon, RLE
18
+ - **Output:** 24 + 2 = **26** per sample ✅
19
+
20
+ ## Referring Segmentation
21
+ - **Pixel:** Sub-sampling(2) × Opacity(3) × Mask style(3) = **18**
22
+ - **Text:** Format(2): polygon, RLE
23
+ - **Output:** 18 + 2 = **20** per sample ✅
24
+
25
+ ## Keypoint
26
+ - **Pixel:** Style(2): points-only, skeleton × Color scheme(3) = **6**
27
+ - **Text:** Format(3): flat-list, part-keyed-JSON, COCO-style = **3**
28
+ - **Output:** 6 + 3 = **9** per sample ✅
29
+
30
+ ## Depth
31
+ - **Pixel:** Colormap(3). **Modality:** image-only → **3** ✅
32
+
33
+ ## Low-level / Generation
34
+ - No variations (single output type each).
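+
+ As a sanity check on the arithmetic above, the combination counts can be reproduced with `itertools.product`. The option names below are taken from the field list in ENCODING_VARIANTS.md; this only illustrates the counting, it is not generator code.
+
+ ```python
+ from itertools import product
+
+ # Object detection, image-only: Style(2) x Mask(2) x Label format(3) = 12
+ style = ["box_only", "box_label"]
+ mask_embed = ["overlay", "separate"]
+ label_fmt = ["none", "color_filled_rect", "text_only"]
+ image_only = list(product(style, mask_embed, label_fmt))
+ assert len(image_only) == 12
+ print(len(image_only) + 2)  # plus 2 text-only formats -> 14 outputs per sample
+ ```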
docs/PIPELINE_OVERVIEW.md ADDED
@@ -0,0 +1,116 @@
1
+ # Pipeline Overview: `run_encoding.sh` → `run_pairwise_prompts_vfinal.sh`
2
+
3
+ ## What each script does
4
+
5
+ ### `run_encoding.sh` — Step 1: Encode annotations into visual/text representations
6
+
7
+ Converts raw annotations into per-annotation encoded outputs (images or text strings) that can be embedded into LLM prompts.
8
+
9
+ **Inputs**
10
+ - `annotations/` — task annotation JSONs (e.g. `object_detection.json`, `keypoint.json`, …)
11
+ - `images/` — image metadata JSONs mapping `image_id` → URL/path
12
+
13
+ **Steps**
14
+ 1. **Inventory** (`src/preprocess/inventory_annotations.py`) — scans `annotations/` and logs unique `(task, prediction_type, prediction_keys)` combinations.
15
+ 2. **Encode** — runs one encoder per task in parallel (up to `NUM_THREADS=24` threads each):
16
+ - `encode_depth.py`
17
+ - `encode_object_detection.py`
18
+ - `encode_instance_segmentation.py`
19
+ - `encode_semantic_segmentation.py`
20
+ - `encode_referring_segmentation.py`
21
+ - `encode_keypoint.py`
22
+ - *(generation / low-level tasks disabled but present as commented code)*
23
+ 3. **Verify** (`src/verify/verify_and_visualize_variants.py`) — checks variant consistency and produces `output/encoding_variants_visual.png`.
24
+
25
+ **Outputs** → `output/encoded/<task>/` — one encoded file (image or text) per annotation per encoding variant.
26
+
27
+ **Key flags**
28
+ | Variable | Default | Effect |
29
+ |---|---|---|
30
+ | `SAMPLES` | `2` | Number of `(image_id, instance_type, synthetic?)` combos to process; `0` = all |
31
+ | `SKIP_SUBSAMPLING` | `0` | Skip instance/semantic/referring segmentation encoders |
32
+ | `BASE_ONLY` | `0` | Only produce `is_base==1` encodings |
33
+ | `BASE_ABLATION` | `0` | Produce base + 1-variant ablation set |
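+
+ These variables can also be set when driving the script programmatically. A minimal sketch, assuming `run_encoding.sh` reads them from the environment (the same mechanism used for the `JEP_*` paths below):
+
+ ```python
+ import os
+ import subprocess
+
+ # Encode all samples, producing only the base (is_base==1) encodings.
+ env = dict(os.environ, SAMPLES="0", BASE_ONLY="1")
+ subprocess.run(["bash", "run_encoding.sh"], env=env, check=True)
+ ```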
34
+
35
+ ---
36
+
37
+ ### `run_pairwise_prompts_vfinal.sh` — Step 2: Build pairwise and scoring prompt JSON
38
+
39
+ Samples pairs of model predictions from the encoded pool and wraps them into structured LLM judge prompts (pairwise comparison + scoring).
40
+
41
+ **Inputs**
42
+ - `output/encoded/` — outputs from Step 1 (or pass `--skip-encode` to reuse existing)
43
+ - `docs/encoding_variants_summary_vfinal3-sem-enc.csv` — CSV listing which encoding variant IDs are `is_final==1`; controls which encodings are used
44
+ - `annotations/` (via the same `ANNOTATIONS_DIR` path) — annotation JSONs for grouping and sampling
45
+ - Original images (fetched on demand from URL if not cached locally)
46
+
47
+ **Steps**
48
+ 1. **Encode (inline)** — unless `--skip-encode` is passed, re-runs encoders limited to `is_final==1` variants from the CSV.
49
+ 2. **Pairwise prompts** (`src/pairwise/build_pairwise_prompts_vfinal.py`) — for each task, samples up to `IMAGES_PER_TASK` unique image IDs and `PAIRS_PER_TASK=40` annotation pairs, then builds a structured prompt JSON with images and instructions.
50
+ 3. **Scoring prompts** (`src/scoring/build_scoring_prompts.py`) — samples `SCORING_COMBOS_PER_TASK=20` `(image_id, coi, error_type, prompt)` combos × `SCORING_ANNOTATIONS_PER_COMBO=5` annotations each, appended to the same JSON with `question_type="scoring"`.
51
+ 4. **Convert messages** (`src/tools/convert_messages.py`) — transforms the raw prompt JSON into a `_messages.json` format ready to send to an LLM API.
52
+
53
+ **Outputs**
54
+ - `output/pairwise/pairwise_prompts_vFinal3-sem-enc.json`
55
+ - `output/pairwise/pairwise_prompts_vFinal3-sem-enc_messages.json` ← primary deliverable
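+
+ For a quick sanity check of the deliverable, the file can be loaded and grouped by question type. A sketch, assuming the top level is a list of prompt records; `question_type` is the field mentioned in step 3, and records without it are assumed to be pairwise prompts:
+
+ ```python
+ import json
+ from collections import Counter
+
+ with open("output/pairwise/pairwise_prompts_vFinal3-sem-enc_messages.json") as f:
+     prompts = json.load(f)
+ print(len(prompts), Counter(p.get("question_type", "pairwise") for p in prompts))
+ ```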
56
+
57
+ ---
58
+
59
+ ## Can they be a single pipeline driven by `data_json_v2` + `images_v2`?
60
+
61
+ **Yes — with environment variable overrides.** The codebase is already wired for this:
62
+
63
+ - `src/common.py` reads `$JEP_ANNOTATIONS_DIR` (default: `annotations/`) and `$JEP_IMAGES_DIR` (default: `images/`) for all encoders.
64
+ - `src/tools/reencode_and_push_annotation_rank_to_hub.py` shows the pattern in use: it sets `JEP_ANNOTATIONS_DIR` and `JEP_IMAGES_DIR` before calling encoders.
65
+
66
+ **However**, two modules still hard-code `ROOT / "annotations"` and are **not** yet env-aware:
67
+ - `src/preprocess/inventory_annotations.py` (line 15)
68
+ - `src/pairwise/build_pairwise_prompts.py` (line 40)
69
+
70
+ ### Minimal changes needed to support `data_json_v2` / `images_v2`
71
+
72
+ 1. **`src/preprocess/inventory_annotations.py`** — replace hard-coded path:
73
+ ```python
74
+ # Before
75
+ ANNOTATIONS_DIR = ROOT / "annotations"
76
+ # After
77
+ ANNOTATIONS_DIR = Path(os.environ.get("JEP_ANNOTATIONS_DIR", str(ROOT / "annotations")))  # needs `import os` and `from pathlib import Path`
78
+ ```
79
+
80
+ 2. **`src/pairwise/build_pairwise_prompts.py`** — same fix at line 40 (and optionally `IMAGES_DIR`).
81
+
82
+ 3. **Run the pipeline** with env overrides:
83
+ ```bash
84
+ export JEP_ANNOTATIONS_DIR=/raid/icy/judge-encode-prompt/data_json_v2
85
+ export JEP_IMAGES_DIR=/raid/icy/judge-encode-prompt/images_v2
86
+ bash run_encoding.sh
87
+ bash run_pairwise_prompts_vfinal.sh
88
+ ```
89
+
90
+ No other changes are required — all encoders inherit the env vars via `src/common.py`.
91
+
92
+ ---
93
+
94
+ ## End-to-end data flow
95
+
96
+ ```
+ data_json_v2/                          images_v2/
+   *.json (annotations)                   *.json (image metadata: image_id → URL/path)
+         │                                      │
+         └──────────────────┬───────────────────┘
+                            │
+                            ▼
+                   run_encoding.sh
+                     inventory_annotations.py           → logs task inventory
+                     encode_<task>.py (×6, parallel)    → output/encoded/<task>/
+                     verify_and_visualize_variants.py
+                            │
+                            ▼
+                   run_pairwise_prompts_vfinal.sh
+                     build_pairwise_prompts_vfinal.py   → pairwise prompt JSON
+                     build_scoring_prompts.py           → scoring prompts appended
+                     convert_messages.py
+                            │
+                            ▼
+                   output/pairwise/pairwise_prompts_vFinal3-sem-enc_messages.json
+                   (LLM-ready judge prompts)
+ ```
docs/TASK_REPRESENTATIONS.md ADDED
@@ -0,0 +1,95 @@
1
+ # Task & prediction type inventory
2
+
3
+ Generated by `src/preprocess/inventory_annotations.py`. Use it as an index of which prediction representation types exist for each CV task.
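+
+ The inventory below can be re-derived with a short script. A rough sketch, assuming each `annotations/*.json` file holds a list of annotation dicts with `task`, `predictions_type` (or `prediction_type`) and a `predictions` list, as documented below:
+
+ ```python
+ import json
+ from pathlib import Path
+
+ tasks, pred_types, keys_by_task = set(), set(), {}
+ for path in Path("annotations").glob("*.json"):
+     for ann in json.loads(path.read_text()):
+         task = ann.get("task")
+         tasks.add(task)
+         pred_types.add(ann.get("predictions_type") or ann.get("prediction_type"))
+         for pred in ann.get("predictions", []):
+             keys_by_task.setdefault(task, set()).update(pred.keys())
+ print(sorted(tasks))
+ print(sorted(t for t in pred_types if t))
+ ```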
4
+
5
+ ## Unique values across all annotations
6
+
7
+ ### Tasks
8
+ ```
9
+ depth_estimation generation_controllable generation_editing generation_inpainting_high_level generation_inpainting_low_level generation_t2i instance_segmentation keypoint lowlevel-deblur lowlevel-derain lowlevel-desnow lowlevel-super-resolution object_detection referring_segmentation semantic_segmentation
10
+ ```
11
+
12
+ ### Prediction types (field: `predictions_type` / `prediction_type`)
13
+ ```
14
+ bbox image keypoint mask npy polygon rle
15
+ ```
16
+
17
+ ### Prediction-item keys (keys inside each element of `predictions[]`) by task
18
+
19
+ - **depth_estimation**: `image, label, npy, url`
20
+ - **generation_controllable**: `image, url`
21
+ - **generation_editing**: `image, url`
22
+ - **generation_inpainting_high_level**: `image, url`
23
+ - **generation_inpainting_low_level**: `image, url`
24
+ - **generation_t2i**: `image, url`
25
+ - **instance_segmentation**: `bbox, class_iou, instance_type, label, matched_gt_iscrowd, matrix, other_metadata, polygon, rle, score`
26
+ - **keypoint**: `error_type, keypoint, label, score`
27
+ - **lowlevel-deblur**: `image, url`
28
+ - **lowlevel-derain**: `image, url`
29
+ - **lowlevel-desnow**: `image, url`
30
+ - **lowlevel-super-resolution**: `image, url`
31
+ - **object_detection**: `bbox, class_id, color_hex, file_path, label`
32
+ - **referring_segmentation**: `label, matrix, polygon`
33
+ - **semantic_segmentation**: `class_id, class_iou, color_hex, image, label, matrix, polygon, url`
34
+
35
+ ## Per-file summary
36
+
37
+ ### depth_annotations.json
38
+ - Tasks: `['depth_estimation']`
39
+ - predictions_type: `['npy']`
40
+ - prediction item keys: `['image', 'label', 'npy', 'url']`
41
+ - annotation count: 358
42
+
43
+ ### generation_annotations.json
44
+ - Tasks: `['generation_controllable', 'generation_editing', 'generation_inpainting_high_level', 'generation_inpainting_low_level', 'generation_t2i']`
45
+ - predictions_type: `['image']`
46
+ - prediction item keys: `['image', 'url']`
47
+ - annotation count: 248
48
+
49
+ ### instance_segmentation_model_annotations.json
50
+ - Tasks: `['instance_segmentation']`
51
+ - predictions_type: `['rle']`
52
+ - prediction item keys: `['bbox', 'class_iou', 'instance_type', 'label', 'matched_gt_iscrowd', 'matrix', 'other_metadata', 'rle', 'score']`
53
+ - annotation count: 362
54
+
55
+ ### instance_segmentation_synthetic_annotations.json
56
+ - Tasks: `['instance_segmentation']`
57
+ - predictions_type: `['polygon']`
58
+ - prediction item keys: `['label', 'matrix', 'polygon']`
59
+ - annotation count: 2021
60
+
61
+ ### keypoint_annotations.json
62
+ - Tasks: `['keypoint']`
63
+ - predictions_type: `['keypoint']`
64
+ - prediction item keys: `['error_type', 'keypoint', 'label', 'score']`
65
+ - annotation count: 366
66
+
67
+ ### lowlevel_annotations.json
68
+ - Tasks: `['lowlevel-deblur', 'lowlevel-derain', 'lowlevel-desnow', 'lowlevel-super-resolution']`
69
+ - predictions_type: `['image']`
70
+ - prediction item keys: `['image', 'url']`
71
+ - annotation count: 3005
72
+
73
+ ### object_detection_annotations.json
74
+ - Tasks: `['object_detection']`
75
+ - predictions_type: `['bbox']`
76
+ - prediction item keys: `['bbox', 'class_id', 'color_hex', 'file_path', 'label']`
77
+ - annotation count: 1164
78
+
79
+ ### referring_segmentation_model_annotations.json
80
+ - Tasks: `['referring_segmentation']`
81
+ - predictions_type: `['polygon']`
82
+ - prediction item keys: `['label', 'matrix', 'polygon']`
83
+ - annotation count: 500
84
+
85
+ ### referring_segmentation_synthetic_annotations.json
86
+ - Tasks: `['referring_segmentation']`
87
+ - predictions_type: `['polygon']`
88
+ - prediction item keys: `['label', 'matrix', 'polygon']`
89
+ - annotation count: 2017
90
+
91
+ ### semantic_segmentation_annotations.json
92
+ - Tasks: `['semantic_segmentation']`
93
+ - predictions_type: `['mask', 'polygon']`
94
+ - prediction item keys: `['class_id', 'class_iou', 'color_hex', 'image', 'label', 'matrix', 'polygon', 'url']`
95
+ - annotation count: 936
docs/analysis_by_question_type.xlsx ADDED
Binary file (11.1 kB).
 
docs/analysis_model_task.csv ADDED
@@ -0,0 +1,31 @@
1
+ model,task,accuracy,correct,total
2
+ Qwen2.5-VL-3B-Instruct-judge-enc,depth_estimation,0.6118,208,340
3
+ Qwen2.5-VL-3B-Instruct-judge-enc,generation_controllable,0.5816,196,337
4
+ Qwen2.5-VL-3B-Instruct-judge-enc,generation_editing,0.55,187,340
5
+ Qwen2.5-VL-3B-Instruct-judge-enc,generation_inpainting_high_level,0.6147,209,340
6
+ Qwen2.5-VL-3B-Instruct-judge-enc,generation_inpainting_low_level,0.6029,205,340
7
+ Qwen2.5-VL-3B-Instruct-judge-enc,generation_t2i,0.6912,235,340
8
+ Qwen2.5-VL-3B-Instruct-judge-enc,instance_segmentation,0.6161,626,1016
9
+ Qwen2.5-VL-3B-Instruct-judge-enc,keypoint,0.6529,222,340
10
+ Qwen2.5-VL-3B-Instruct-judge-enc,lowlevel-deblur,0.7818,172,220
11
+ Qwen2.5-VL-3B-Instruct-judge-enc,lowlevel-derain,0.8765,298,340
12
+ Qwen2.5-VL-3B-Instruct-judge-enc,lowlevel-desnow,0.9149,301,329
13
+ Qwen2.5-VL-3B-Instruct-judge-enc,lowlevel-super-resolution,0.5441,185,340
14
+ Qwen2.5-VL-3B-Instruct-judge-enc,object_detection,0.7422,757,1020
15
+ Qwen2.5-VL-3B-Instruct-judge-enc,referring_segmentation,0.5833,595,1020
16
+ Qwen2.5-VL-3B-Instruct-judge-enc,semantic_segmentation,0.6079,727,1196
17
+ Qwen2.5-VL-7B-Instruct-judge-enc,depth_estimation,0.7176,244,340
18
+ Qwen2.5-VL-7B-Instruct-judge-enc,generation_controllable,0.6103,166,272
19
+ Qwen2.5-VL-7B-Instruct-judge-enc,generation_editing,0.5605,190,339
20
+ Qwen2.5-VL-7B-Instruct-judge-enc,generation_inpainting_high_level,0.6206,211,340
21
+ Qwen2.5-VL-7B-Instruct-judge-enc,generation_inpainting_low_level,0.5941,202,340
22
+ Qwen2.5-VL-7B-Instruct-judge-enc,generation_t2i,0.6735,229,340
23
+ Qwen2.5-VL-7B-Instruct-judge-enc,instance_segmentation,0.6248,631,1010
24
+ Qwen2.5-VL-7B-Instruct-judge-enc,keypoint,0.6706,228,340
25
+ Qwen2.5-VL-7B-Instruct-judge-enc,lowlevel-deblur,0.7318,161,220
26
+ Qwen2.5-VL-7B-Instruct-judge-enc,lowlevel-derain,0.9059,308,340
27
+ Qwen2.5-VL-7B-Instruct-judge-enc,lowlevel-desnow,0.9331,307,329
28
+ Qwen2.5-VL-7B-Instruct-judge-enc,lowlevel-super-resolution,0.5971,203,340
29
+ Qwen2.5-VL-7B-Instruct-judge-enc,object_detection,0.7529,768,1020
30
+ Qwen2.5-VL-7B-Instruct-judge-enc,referring_segmentation,0.6206,633,1020
31
+ Qwen2.5-VL-7B-Instruct-judge-enc,semantic_segmentation,0.6279,751,1196
docs/analysis_pairwise_encoding_by_task.csv ADDED
@@ -0,0 +1,30 @@
1
+ task,encoding,encoding_id,accuracy,correct,total
2
+ depth_estimation,plasma,97,0.6647,452,680
3
+ generation_controllable,generation_controllable,,0.5944,362,609
4
+ generation_editing,generation_editing,,0.5552,377,679
5
+ generation_inpainting_high_level,generation_inpainting_high_level,,0.6176,420,680
6
+ generation_inpainting_low_level,generation_inpainting_low_level,,0.5985,407,680
7
+ generation_t2i,generation_t2i,,0.6824,464,680
8
+ instance_segmentation,1742,1742,0.5893,396,672
9
+ instance_segmentation,pixel_ss1_m0_o0_l1_c1_b1,17,0.6391,425,665
10
+ instance_segmentation,text_polygon,42,0.6328,436,689
11
+ keypoint,pixel_s1_c1_m0,89,0.6618,450,680
12
+ lowlevel-deblur,lowlevel-deblur,,0.7568,333,440
13
+ lowlevel-derain,lowlevel-derain,,0.8912,606,680
14
+ lowlevel-desnow,lowlevel-desnow,,0.924,608,658
15
+ lowlevel-super-resolution,lowlevel-super-resolution,,0.5706,388,680
16
+ object_detection,0305,305,0.7662,521,680
17
+ object_detection,pixel_s1_m0,3,0.6765,460,680
18
+ object_detection,text_xyxy,5,0.8,544,680
19
+ referring_segmentation,7080,7080,0.6029,410,680
20
+ referring_segmentation,pixel_ss1_m0_o0_m2,70,0.5809,395,680
21
+ referring_segmentation,text_polygon,80,0.6221,423,680
22
+ semantic_segmentation,4649,4649,0.6288,454,722
23
+ semantic_segmentation,pixel_ss0_m0,44,0.3333,2,6
24
+ semantic_segmentation,pixel_ss0_m1,45,0.5,3,6
25
+ semantic_segmentation,pixel_ss1_m0_o0_l1_c0,49,0.608,439,722
26
+ semantic_segmentation,pixel_ss1_m0_o0_l1_c1,50,0.5625,27,48
27
+ semantic_segmentation,pixel_ss1_m0_o1_l1_c0,53,0.5625,27,48
28
+ semantic_segmentation,pixel_ss1_m1_o0_l1_c0,57,0.6429,45,70
29
+ semantic_segmentation,text_matrix,46,0.6274,453,722
30
+ semantic_segmentation,text_polygon,63,0.5833,28,48
docs/analysis_pairwise_overall_by_task.csv ADDED
@@ -0,0 +1,16 @@
1
+ task,accuracy,correct,total
2
+ depth_estimation,0.6647,452,680
3
+ generation_controllable,0.5944,362,609
4
+ generation_editing,0.5552,377,679
5
+ generation_inpainting_high_level,0.6176,420,680
6
+ generation_inpainting_low_level,0.5985,407,680
7
+ generation_t2i,0.6824,464,680
8
+ instance_segmentation,0.6204,1257,2026
9
+ keypoint,0.6618,450,680
10
+ lowlevel-deblur,0.7568,333,440
11
+ lowlevel-derain,0.8912,606,680
12
+ lowlevel-desnow,0.924,608,658
13
+ lowlevel-super-resolution,0.5706,388,680
14
+ object_detection,0.7475,1525,2040
15
+ referring_segmentation,0.602,1228,2040
16
+ semantic_segmentation,0.6179,1478,2392
docs/analysis_ranking_by_task.csv ADDED
@@ -0,0 +1,7 @@
1
+ task,accuracy_exact,accuracy_top1,correct_exact,correct_top1,total
2
+ depth_estimation,0.3281,0.4219,21,27,64
3
+ instance_segmentation,0.3478,0.5,48,69,138
4
+ keypoint,0.0,0.1935,0,6,31
5
+ object_detection,0.4569,0.5259,53,61,116
6
+ referring_segmentation,0.1983,0.3554,24,43,121
7
+ semantic_segmentation,0.0324,0.1871,9,52,278
docs/analysis_scoring_by_task.csv ADDED
@@ -0,0 +1,7 @@
1
+ task,accuracy,correct,total
2
+ depth_estimation,0.3,72,240
3
+ instance_segmentation,0.3463,169,488
4
+ keypoint,0.1211,23,190
5
+ object_detection,0.1376,71,516
6
+ referring_segmentation,0.1343,87,648
7
+ semantic_segmentation,0.2744,422,1538
docs/encoding_variants_drop_bottom_accuracy.xlsx ADDED
Binary file (13.6 kB).
 
docs/encoding_variants_drop_bottom_accuracy/drop_bottom_10pct.csv ADDED
@@ -0,0 +1,15 @@
1
+ task,count_questions,count_unique_image_id
2
+ depth_estimation,331,128
3
+ generation_controllable,396,39
4
+ generation_editing,400,40
5
+ generation_inpainting_high_level,400,50
6
+ generation_inpainting_low_level,400,80
7
+ generation_t2i,400,400
8
+ instance_segmentation,283,35
9
+ keypoint,260,82
10
+ lowlevel-deblur,249,40
11
+ lowlevel-derain,368,40
12
+ lowlevel-desnow,344,60
13
+ lowlevel-super-resolution,341,40
14
+ object_detection,345,38
15
+ referring_segmentation,349,39
docs/encoding_variants_drop_bottom_accuracy/drop_bottom_15pct.csv ADDED
@@ -0,0 +1,15 @@
1
+ task,count_questions,count_unique_image_id
2
+ depth_estimation,331,128
3
+ generation_controllable,207,38
4
+ generation_editing,345,40
5
+ generation_inpainting_high_level,400,50
6
+ generation_inpainting_low_level,400,80
7
+ generation_t2i,400,400
8
+ instance_segmentation,283,35
9
+ keypoint,260,82
10
+ lowlevel-deblur,249,40
11
+ lowlevel-derain,368,40
12
+ lowlevel-desnow,344,60
13
+ lowlevel-super-resolution,315,40
14
+ object_detection,345,38
15
+ referring_segmentation,349,39
docs/encoding_variants_drop_bottom_accuracy/drop_bottom_20pct.csv ADDED
@@ -0,0 +1,15 @@
1
+ task,count_questions,count_unique_image_id
2
+ depth_estimation,331,128
3
+ generation_controllable,207,38
4
+ generation_editing,242,40
5
+ generation_inpainting_high_level,232,50
6
+ generation_inpainting_low_level,400,80
7
+ generation_t2i,400,400
8
+ instance_segmentation,283,35
9
+ keypoint,260,82
10
+ lowlevel-deblur,249,40
11
+ lowlevel-derain,368,40
12
+ lowlevel-desnow,344,60
13
+ lowlevel-super-resolution,315,40
14
+ object_detection,345,38
15
+ referring_segmentation,349,39
docs/encoding_variants_drop_bottom_accuracy/drop_bottom_25pct.csv ADDED
@@ -0,0 +1,15 @@
1
+ task,count_questions,count_unique_image_id
2
+ depth_estimation,331,128
3
+ generation_controllable,207,38
4
+ generation_editing,242,40
5
+ generation_inpainting_high_level,217,50
6
+ generation_inpainting_low_level,304,79
7
+ generation_t2i,272,272
8
+ instance_segmentation,278,34
9
+ keypoint,260,82
10
+ lowlevel-deblur,249,40
11
+ lowlevel-derain,368,40
12
+ lowlevel-desnow,344,60
13
+ lowlevel-super-resolution,315,40
14
+ object_detection,319,38
15
+ referring_segmentation,349,39
docs/encoding_variants_drop_bottom_accuracy/drop_bottom_30pct.csv ADDED
@@ -0,0 +1,15 @@
1
+ task,count_questions,count_unique_image_id
2
+ depth_estimation,331,128
3
+ generation_controllable,207,38
4
+ generation_editing,242,40
5
+ generation_inpainting_high_level,217,50
6
+ generation_inpainting_low_level,304,79
7
+ generation_t2i,272,272
8
+ instance_segmentation,207,30
9
+ keypoint,260,82
10
+ lowlevel-deblur,249,40
11
+ lowlevel-derain,368,40
12
+ lowlevel-desnow,344,60
13
+ lowlevel-super-resolution,315,40
14
+ object_detection,248,35
15
+ referring_segmentation,221,39
docs/encoding_variants_drop_bottom_accuracy/drop_bottom_35pct.csv ADDED
@@ -0,0 +1,15 @@
1
+ task,count_questions,count_unique_image_id
2
+ depth_estimation,178,100
3
+ generation_controllable,207,38
4
+ generation_editing,242,40
5
+ generation_inpainting_high_level,217,50
6
+ generation_inpainting_low_level,304,79
7
+ generation_t2i,272,272
8
+ instance_segmentation,193,29
9
+ keypoint,190,75
10
+ lowlevel-deblur,249,40
11
+ lowlevel-derain,368,40
12
+ lowlevel-desnow,344,60
13
+ lowlevel-super-resolution,315,40
14
+ object_detection,247,35
15
+ referring_segmentation,188,39
docs/encoding_variants_drop_bottom_accuracy/drop_bottom_40pct.csv ADDED
@@ -0,0 +1,15 @@
1
+ task,count_questions,count_unique_image_id
2
+ depth_estimation,121,83
3
+ generation_controllable,207,38
4
+ generation_editing,242,40
5
+ generation_inpainting_high_level,217,50
6
+ generation_inpainting_low_level,304,79
7
+ generation_t2i,272,272
8
+ instance_segmentation,193,29
9
+ keypoint,190,75
10
+ lowlevel-deblur,113,40
11
+ lowlevel-derain,291,40
12
+ lowlevel-desnow,344,60
13
+ lowlevel-super-resolution,315,40
14
+ object_detection,247,35
15
+ referring_segmentation,188,39
docs/encoding_variants_drop_bottom_accuracy/drop_bottom_5pct.csv ADDED
@@ -0,0 +1,15 @@
1
+ task,count_questions,count_unique_image_id
2
+ depth_estimation,400,138
3
+ generation_controllable,396,39
4
+ generation_editing,400,40
5
+ generation_inpainting_high_level,400,50
6
+ generation_inpainting_low_level,400,80
7
+ generation_t2i,400,400
8
+ instance_segmentation,283,35
9
+ keypoint,318,84
10
+ lowlevel-deblur,258,40
11
+ lowlevel-derain,400,40
12
+ lowlevel-desnow,387,60
13
+ lowlevel-super-resolution,400,40
14
+ object_detection,345,38
15
+ referring_segmentation,349,39
docs/encoding_variants_summary.csv ADDED
@@ -0,0 +1,106 @@
1
+ encoding_id,is_final,is_base,is_base_ablation,task,pixel_style,mask_embed,label_fmt,sub_sampling,opacity,label_overlay,color_scheme,instance_bbox,mask_style,text_format,colormap,combo_text_id,combo_image_id,combo_stem
2
+ 1,0,0,1,object_detection,box_only,overlay,-,-,-,-,-,-,-,-,-,-,-,-
3
+ 2,0,0,0,object_detection,box_only,separate,-,-,-,-,-,-,-,-,-,,,
4
+ 3,1,1,1,object_detection,box_label,overlay,-,-,-,-,-,-,-,-,-,,,
5
+ 4,0,0,1,object_detection,box_label,separate,-,-,-,-,-,-,-,-,-,,,
6
+ 5,1,0,1,object_detection,-,-,-,-,-,-,-,-,-,xyxy,-,,,
7
+ 6,0,0,1,object_detection,-,-,-,-,-,-,-,-,-,xywh,-,,,
8
+ 7,0,0,1,instance_segmentation,-,overlay,-,sub_sampled,-,-,-,-,-,-,-,,,
9
+ 8,0,0,1,instance_segmentation,-,separate,-,sub_sampled,-,-,-,-,-,-,-,,,
10
+ 9,0,0,1,instance_segmentation,-,text,-,sub_sampled,-,-,-,-,-,-,-,,,
11
+ 10,0,0,0,instance_segmentation,-,overlay,-,original,0.5,none,color_by_class,none,-,-,-,,,
12
+ 11,0,0,1,instance_segmentation,-,overlay,-,original,0.5,none,color_by_class,dashed,-,-,-,,,
13
+ 12,0,0,0,instance_segmentation,-,overlay,-,original,0.5,none,color_by_instance,none,-,-,-,,,
14
+ 13,0,0,0,instance_segmentation,-,overlay,-,original,0.5,none,color_by_instance,dashed,-,-,-,,,
15
+ 14,0,0,1,instance_segmentation,-,overlay,-,original,0.5,text_labels,color_by_class,none,-,-,-,,,
16
+ 15,0,1,1,instance_segmentation,-,overlay,-,original,0.5,text_labels,color_by_class,dashed,-,-,-,,,
17
+ 16,0,0,0,instance_segmentation,-,overlay,-,original,0.5,text_labels,color_by_instance,none,-,-,-,,,
18
+ 17,1,0,1,instance_segmentation,-,overlay,-,original,0.5,text_labels,color_by_instance,dashed,-,-,-,,,
19
+ 18,0,0,0,instance_segmentation,-,overlay,-,original,1,none,color_by_class,none,-,-,-,,,
20
+ 19,0,0,0,instance_segmentation,-,overlay,-,original,1,none,color_by_class,dashed,-,-,-,,,
21
+ 20,0,0,0,instance_segmentation,-,overlay,-,original,1,none,color_by_instance,none,-,-,-,,,
22
+ 21,0,0,0,instance_segmentation,-,overlay,-,original,1,none,color_by_instance,dashed,-,-,-,,,
23
+ 22,0,0,0,instance_segmentation,-,overlay,-,original,1,text_labels,color_by_class,none,-,-,-,,,
24
+ 23,0,0,1,instance_segmentation,-,overlay,-,original,1,text_labels,color_by_class,dashed,-,-,-,,,
25
+ 24,0,0,0,instance_segmentation,-,overlay,-,original,1,text_labels,color_by_instance,none,-,-,-,,,
26
+ 25,0,0,0,instance_segmentation,-,overlay,-,original,1,text_labels,color_by_instance,dashed,-,-,-,,,
27
+ 26,0,0,0,instance_segmentation,-,separate,-,original,0.5,none,color_by_class,none,-,-,-,,,
28
+ 27,0,0,0,instance_segmentation,-,separate,-,original,0.5,none,color_by_class,dashed,-,-,-,,,
29
+ 28,0,0,0,instance_segmentation,-,separate,-,original,0.5,none,color_by_instance,none,-,-,-,,,
30
+ 29,0,0,0,instance_segmentation,-,separate,-,original,0.5,none,color_by_instance,dashed,-,-,-,,,
31
+ 30,0,0,0,instance_segmentation,-,separate,-,original,0.5,text_labels,color_by_class,none,-,-,-,,,
32
+ 31,0,0,1,instance_segmentation,-,separate,-,original,0.5,text_labels,color_by_class,dashed,-,-,-,,,
33
+ 32,0,0,0,instance_segmentation,-,separate,-,original,0.5,text_labels,color_by_instance,none,-,-,-,,,
34
+ 33,0,0,0,instance_segmentation,-,separate,-,original,0.5,text_labels,color_by_instance,dashed,-,-,-,,,
35
+ 34,0,0,0,instance_segmentation,-,separate,-,original,1,none,color_by_class,none,-,-,-,,,
36
+ 35,0,0,0,instance_segmentation,-,separate,-,original,1,none,color_by_class,dashed,-,-,-,,,
37
+ 36,0,0,0,instance_segmentation,-,separate,-,original,1,none,color_by_instance,none,-,-,-,,,
38
+ 37,0,0,0,instance_segmentation,-,separate,-,original,1,none,color_by_instance,dashed,-,-,-,,,
39
+ 38,0,0,0,instance_segmentation,-,separate,-,original,1,text_labels,color_by_class,none,-,-,-,,,
40
+ 39,0,0,0,instance_segmentation,-,separate,-,original,1,text_labels,color_by_class,dashed,-,-,-,,,
41
+ 40,0,0,0,instance_segmentation,-,separate,-,original,1,text_labels,color_by_instance,none,-,-,-,,,
42
+ 41,0,0,0,instance_segmentation,-,separate,-,original,1,text_labels,color_by_instance,dashed,-,-,-,,,
43
+ 42,1,0,1,instance_segmentation,-,-,-,-,-,-,-,-,-,polygon,-,,,
44
+ 43,0,0,1,instance_segmentation,-,-,-,-,-,-,-,-,-,rle,-,,,
45
+ 44,0,0,1,semantic_segmentation,-,overlay,-,sub_sampled,-,-,-,-,-,-,-,,,
46
+ 45,0,0,1,semantic_segmentation,-,separate,-,sub_sampled,-,-,-,-,-,-,-,,,
47
+ 46,1,0,1,semantic_segmentation,-,text,-,sub_sampled,-,-,-,-,-,-,-,,,
48
+ 47,0,0,1,semantic_segmentation,-,overlay,-,original,0.5,none,standard_palette,-,-,-,-,,,
49
+ 48,0,0,0,semantic_segmentation,-,overlay,-,original,0.5,none,random,-,-,-,-,,,
50
+ 49,1,1,1,semantic_segmentation,-,overlay,-,original,0.5,text_labels,standard_palette,-,-,-,-,,,
51
+ 50,0,0,1,semantic_segmentation,-,overlay,-,original,0.5,text_labels,random,-,-,-,-,,,
52
+ 51,0,0,0,semantic_segmentation,-,overlay,-,original,1,none,standard_palette,-,-,-,-,,,
53
+ 52,0,0,0,semantic_segmentation,-,overlay,-,original,1,none,random,-,-,-,-,,,
54
+ 53,0,0,1,semantic_segmentation,-,overlay,-,original,1,text_labels,standard_palette,-,-,-,-,,,
55
+ 54,0,0,0,semantic_segmentation,-,overlay,-,original,1,text_labels,random,-,-,-,-,,,
56
+ 55,0,0,0,semantic_segmentation,-,separate,-,original,0.5,none,standard_palette,-,-,-,-,,,
57
+ 56,0,0,0,semantic_segmentation,-,separate,-,original,0.5,none,random,-,-,-,-,,,
58
+ 57,0,0,1,semantic_segmentation,-,separate,-,original,0.5,text_labels,standard_palette,-,-,-,-,,,
59
+ 58,0,0,0,semantic_segmentation,-,separate,-,original,0.5,text_labels,random,-,-,-,-,,,
60
+ 59,0,0,0,semantic_segmentation,-,separate,-,original,1,none,standard_palette,-,-,-,-,,,
61
+ 60,0,0,0,semantic_segmentation,-,separate,-,original,1,none,random,-,-,-,-,,,
62
+ 61,0,0,0,semantic_segmentation,-,separate,-,original,1,text_labels,standard_palette,-,-,-,-,,,
63
+ 62,0,0,0,semantic_segmentation,-,separate,-,original,1,text_labels,random,-,-,-,-,,,
64
+ 63,0,0,1,semantic_segmentation,-,-,-,-,-,-,-,-,-,polygon,-,,,
65
+ 64,0,0,1,semantic_segmentation,-,-,-,-,-,-,-,-,-,rle,-,,,
66
+ 65,0,0,1,referring_segmentation,-,overlay,-,sub_sampled,-,-,-,-,-,-,-,,,
67
+ 66,0,0,1,referring_segmentation,-,separate,-,sub_sampled,-,-,-,-,-,-,-,,,
68
+ 67,0,0,1,referring_segmentation,-,text,-,sub_sampled,-,-,-,-,-,-,-,,,
69
+ 68,0,1,1,referring_segmentation,-,overlay,-,original,0.5,-,-,-,fill,-,-,,,
70
+ 69,0,0,1,referring_segmentation,-,overlay,-,original,0.5,-,-,-,contour,-,-,,,
71
+ 70,1,0,1,referring_segmentation,-,overlay,-,original,0.5,-,-,-,fill_contour,-,-,,,
72
+ 71,0,0,1,referring_segmentation,-,overlay,-,original,1,-,-,-,fill,-,-,,,
73
+ 72,0,0,0,referring_segmentation,-,overlay,-,original,1,-,-,-,contour,-,-,,,
74
+ 73,0,0,0,referring_segmentation,-,overlay,-,original,1,-,-,-,fill_contour,-,-,,,
75
+ 74,0,0,1,referring_segmentation,-,separate,-,original,0.5,-,-,-,fill,-,-,,,
76
+ 75,0,0,0,referring_segmentation,-,separate,-,original,0.5,-,-,-,contour,-,-,,,
77
+ 76,0,0,0,referring_segmentation,-,separate,-,original,0.5,-,-,-,fill_contour,-,-,,,
78
+ 77,0,0,0,referring_segmentation,-,separate,-,original,1,-,-,-,fill,-,-,,,
79
+ 78,0,0,0,referring_segmentation,-,separate,-,original,1,-,-,-,contour,-,-,,,
80
+ 79,0,0,0,referring_segmentation,-,separate,-,original,1,-,-,-,fill_contour,-,-,,,
81
+ 80,1,0,1,referring_segmentation,-,-,-,-,-,-,-,-,-,polygon,-,,,
82
+ 81,0,0,1,referring_segmentation,-,-,-,-,-,-,-,-,-,rle,-,,,
83
+ 82,0,0,0,keypoint,points_only,overlay,-,-,-,-,same_color,-,-,-,-,,,
84
+ 83,0,0,1,keypoint,points_only,overlay,-,-,-,-,color_by_instance,-,-,-,-,,,
85
+ 84,0,0,0,keypoint,points_only,overlay,-,-,-,-,color_by_part,-,-,-,-,,,
86
+ 85,0,0,0,keypoint,points_only,separate,-,-,-,-,same_color,-,-,-,-,,,
87
+ 86,0,0,0,keypoint,points_only,separate,-,-,-,-,color_by_instance,-,-,-,-,,,
88
+ 87,0,0,0,keypoint,points_only,separate,-,-,-,-,color_by_part,-,-,-,-,,,
89
+ 88,0,0,1,keypoint,skeleton,overlay,-,-,-,-,same_color,-,-,-,-,,,
90
+ 89,1,1,1,keypoint,skeleton,overlay,-,-,-,-,color_by_instance,-,-,-,-,,,
91
+ 90,0,0,1,keypoint,skeleton,overlay,-,-,-,-,color_by_part,-,-,-,-,,,
92
+ 91,0,0,0,keypoint,skeleton,separate,-,-,-,-,same_color,-,-,-,-,,,
93
+ 92,0,0,1,keypoint,skeleton,separate,-,-,-,-,color_by_instance,-,-,-,-,,,
94
+ 93,0,0,0,keypoint,skeleton,separate,-,-,-,-,color_by_part,-,-,-,-,,,
95
+ 94,0,0,1,keypoint,-,-,-,-,-,-,-,-,-,flat_list,-,,,
96
+ 95,0,0,1,keypoint,-,-,-,-,-,-,-,-,-,part_keyed_json,-,,,
97
+ 96,0,0,1,keypoint,-,-,-,-,-,-,-,-,-,coco_style,-,,,
98
+ 97,1,0,1,depth_estimation,-,-,-,-,-,-,-,-,-,-,plasma,,,
99
+ 98,0,0,1,depth_estimation,-,-,-,-,-,-,-,-,-,-,turbo,,,
100
+ 99,0,0,1,depth_estimation,-,-,-,-,-,-,-,-,-,-,gray,,,
101
+ 100,1,1,1,low_level,-,-,-,-,-,-,-,-,-,-,-,,,
102
+ 101,1,1,1,generation,-,-,-,-,-,-,-,-,-,-,-,,,
103
+ 1742,1,0,1,instance_segmentation,-,-,-,-,-,-,-,-,-,-,-,42,17,1742
104
+ 305,1,0,1,object_detection,-,-,-,-,-,-,-,-,-,-,-,5,3,0305
105
+ 7080,1,0,1,referring_segmentation,-,-,-,-,-,-,-,-,-,-,-,80,70,7080
106
+ 4649,1,0,1,semantic_segmentation,-,-,-,-,-,-,-,-,-,-,-,46,49,4649
docs/encoding_variants_summary_results.csv ADDED
@@ -0,0 +1,106 @@
1
+ encoding_id,is_final,is_base,is_base_ablation,task,pixel_style,mask_embed,label_fmt,sub_sampling,opacity,label_overlay,color_scheme,instance_bbox,mask_style,text_format,colormap,combo_text_id,combo_image_id,combo_stem,avg_acc,InternVL3-1B-Instruct-judge-enc,InternVL3-2B-Instruct-judge-enc,InternVL3-38B-Instruct-judge-enc,InternVL3-78B-Instruct-judge-enc,InternVL3-8B-Instruct-judge-enc,Qwen2.5-VL-32B-Instruct-judge-enc,Qwen2.5-VL-3B-Instruct-judge-enc,Qwen2.5-VL-72B-Instruct-API-judge-enc,Qwen2.5-VL-7B-Instruct-judge-enc,gemma-3-12b-it-judge-enc,gemma-3-4b-it-judge-enc
2
+ 1,0,0,1,object_detection,box_only,overlay,-,-,-,-,-,-,-,-,-,-,-,-,,,,,,,,,,,,
3
+ 2,0,0,0,object_detection,box_only,separate,-,-,-,-,-,-,-,-,-,,,,,,,,,,,,,,,
4
+ 3,1,1,1,object_detection,box_label,overlay,-,-,-,-,-,-,-,-,-,,,,0.5915,0.4889 (incomplete),0.5059 (incomplete),0.6780 (incomplete),0.6441 (incomplete),0.5354 (incomplete),0.6650 (incomplete),0.5694,0.6522 (incomplete),0.5812,0.5981 (incomplete),0.5984 (incomplete)
5
+ 4,0,0,1,object_detection,box_label,separate,-,-,-,-,-,-,-,-,-,,,,,,,,,,,,,,,
6
+ 5,1,0,1,object_detection,-,-,-,-,-,-,-,-,-,xyxy,-,,,,0.6402,0.5314 (incomplete),0.6028 (incomplete),0.6644 (incomplete),0.6485 (incomplete),0.6064 (incomplete),0.7337 (incomplete),0.6801,0.8270 (incomplete),0.6809,0.5547 (incomplete),0.5416 (incomplete)
7
+ 6,0,0,1,object_detection,-,-,-,-,-,-,-,-,-,xywh,-,,,,,,,,,,,,,,,
8
+ 7,0,0,1,instance_segmentation,-,overlay,-,sub_sampled,-,-,-,-,-,-,-,,,,,,,,,,,,,,,
9
+ 8,0,0,1,instance_segmentation,-,separate,-,sub_sampled,-,-,-,-,-,-,-,,,,,,,,,,,,,,,
10
+ 9,0,0,1,instance_segmentation,-,text,-,sub_sampled,-,-,-,-,-,-,-,,,,,,,,,,,,,,,
11
+ 10,0,0,0,instance_segmentation,-,overlay,-,original,0.5,none,color_by_class,none,-,-,-,,,,,,,,,,,,,,,
12
+ 11,0,0,1,instance_segmentation,-,overlay,-,original,0.5,none,color_by_class,dashed,-,-,-,,,,,,,,,,,,,,,
13
+ 12,0,0,0,instance_segmentation,-,overlay,-,original,0.5,none,color_by_instance,none,-,-,-,,,,,,,,,,,,,,,
14
+ 13,0,0,0,instance_segmentation,-,overlay,-,original,0.5,none,color_by_instance,dashed,-,-,-,,,,,,,,,,,,,,,
15
+ 14,0,0,1,instance_segmentation,-,overlay,-,original,0.5,text_labels,color_by_class,none,-,-,-,,,,,,,,,,,,,,,
16
+ 15,0,1,1,instance_segmentation,-,overlay,-,original,0.5,text_labels,color_by_class,dashed,-,-,-,,,,,,,,,,,,,,,
17
+ 16,0,0,0,instance_segmentation,-,overlay,-,original,0.5,text_labels,color_by_instance,none,-,-,-,,,,,,,,,,,,,,,
18
+ 17,1,0,1,instance_segmentation,-,overlay,-,original,0.5,text_labels,color_by_instance,dashed,-,-,-,,,,0.5652,0.4758 (incomplete),0.5197 (incomplete),0.6273 (incomplete),0.5263 (incomplete),0.5236 (incomplete),0.6012 (incomplete),0.5409,0.5932 (incomplete),0.5835,0.6133 (incomplete),0.6250 (incomplete)
19
+ 18,0,0,0,instance_segmentation,-,overlay,-,original,1,none,color_by_class,none,-,-,-,,,,,,,,,,,,,,,
20
+ 19,0,0,0,instance_segmentation,-,overlay,-,original,1,none,color_by_class,dashed,-,-,-,,,,,,,,,,,,,,,
21
+ 20,0,0,0,instance_segmentation,-,overlay,-,original,1,none,color_by_instance,none,-,-,-,,,,,,,,,,,,,,,
22
+ 21,0,0,0,instance_segmentation,-,overlay,-,original,1,none,color_by_instance,dashed,-,-,-,,,,,,,,,,,,,,,
23
+ 22,0,0,0,instance_segmentation,-,overlay,-,original,1,text_labels,color_by_class,none,-,-,-,,,,,,,,,,,,,,,
24
+ 23,0,0,1,instance_segmentation,-,overlay,-,original,1,text_labels,color_by_class,dashed,-,-,-,,,,,,,,,,,,,,,
25
+ 24,0,0,0,instance_segmentation,-,overlay,-,original,1,text_labels,color_by_instance,none,-,-,-,,,,,,,,,,,,,,,
26
+ 25,0,0,0,instance_segmentation,-,overlay,-,original,1,text_labels,color_by_instance,dashed,-,-,-,,,,,,,,,,,,,,,
27
+ 26,0,0,0,instance_segmentation,-,separate,-,original,0.5,none,color_by_class,none,-,-,-,,,,,,,,,,,,,,,
28
+ 27,0,0,0,instance_segmentation,-,separate,-,original,0.5,none,color_by_class,dashed,-,-,-,,,,,,,,,,,,,,,
29
+ 28,0,0,0,instance_segmentation,-,separate,-,original,0.5,none,color_by_instance,none,-,-,-,,,,,,,,,,,,,,,
30
+ 29,0,0,0,instance_segmentation,-,separate,-,original,0.5,none,color_by_instance,dashed,-,-,-,,,,,,,,,,,,,,,
31
+ 30,0,0,0,instance_segmentation,-,separate,-,original,0.5,text_labels,color_by_class,none,-,-,-,,,,,,,,,,,,,,,
32
+ 31,0,0,1,instance_segmentation,-,separate,-,original,0.5,text_labels,color_by_class,dashed,-,-,-,,,,,,,,,,,,,,,
33
+ 32,0,0,0,instance_segmentation,-,separate,-,original,0.5,text_labels,color_by_instance,none,-,-,-,,,,,,,,,,,,,,,
34
+ 33,0,0,0,instance_segmentation,-,separate,-,original,0.5,text_labels,color_by_instance,dashed,-,-,-,,,,,,,,,,,,,,,
35
+ 34,0,0,0,instance_segmentation,-,separate,-,original,1,none,color_by_class,none,-,-,-,,,,,,,,,,,,,,,
36
+ 35,0,0,0,instance_segmentation,-,separate,-,original,1,none,color_by_class,dashed,-,-,-,,,,,,,,,,,,,,,
37
+ 36,0,0,0,instance_segmentation,-,separate,-,original,1,none,color_by_instance,none,-,-,-,,,,,,,,,,,,,,,
38
+ 37,0,0,0,instance_segmentation,-,separate,-,original,1,none,color_by_instance,dashed,-,-,-,,,,,,,,,,,,,,,
39
+ 38,0,0,0,instance_segmentation,-,separate,-,original,1,text_labels,color_by_class,none,-,-,-,,,,,,,,,,,,,,,
40
+ 39,0,0,0,instance_segmentation,-,separate,-,original,1,text_labels,color_by_class,dashed,-,-,-,,,,,,,,,,,,,,,
41
+ 40,0,0,0,instance_segmentation,-,separate,-,original,1,text_labels,color_by_instance,none,-,-,-,,,,,,,,,,,,,,,
42
+ 41,0,0,0,instance_segmentation,-,separate,-,original,1,text_labels,color_by_instance,dashed,-,-,-,,,,,,,,,,,,,,,
43
+ 42,1,0,1,instance_segmentation,-,-,-,-,-,-,-,-,-,polygon,-,,,,0.5671,0.5288 (incomplete),0.5524 (incomplete),0.5653 (incomplete),0.6290 (incomplete),0.5568 (incomplete),0.6645 (incomplete),0.5321,0.6656 (incomplete),0.5582,0.5513 (incomplete),0.5237 (incomplete)
44
+ 43,0,0,1,instance_segmentation,-,-,-,-,-,-,-,-,-,rle,-,,,,,,,,,,,,,,,
45
+ 44,0,0,1,semantic_segmentation,-,overlay,-,sub_sampled,-,-,-,-,-,-,-,,,,,,,,,,,,,,,
46
+ 45,0,0,1,semantic_segmentation,-,separate,-,sub_sampled,-,-,-,-,-,-,-,,,,,,,,,,,,,,,
47
+ 46,1,0,1,semantic_segmentation,-,text,-,sub_sampled,-,-,-,-,-,-,-,,,,,,,,,,,,,,,
48
+ 47,0,0,1,semantic_segmentation,-,overlay,-,original,0.5,none,standard_palette,-,-,-,-,,,,,,,,,,,,,,,
49
+ 48,0,0,0,semantic_segmentation,-,overlay,-,original,0.5,none,random,-,-,-,-,,,,,,,,,,,,,,,
50
+ 49,1,1,1,semantic_segmentation,-,overlay,-,original,0.5,text_labels,standard_palette,-,-,-,-,,,,,,,,,,,,,,,
51
+ 50,0,0,1,semantic_segmentation,-,overlay,-,original,0.5,text_labels,random,-,-,-,-,,,,,,,,,,,,,,,
52
+ 51,0,0,0,semantic_segmentation,-,overlay,-,original,1,none,standard_palette,-,-,-,-,,,,,,,,,,,,,,,
53
+ 52,0,0,0,semantic_segmentation,-,overlay,-,original,1,none,random,-,-,-,-,,,,,,,,,,,,,,,
54
+ 53,0,0,1,semantic_segmentation,-,overlay,-,original,1,text_labels,standard_palette,-,-,-,-,,,,,,,,,,,,,,,
55
+ 54,0,0,0,semantic_segmentation,-,overlay,-,original,1,text_labels,random,-,-,-,-,,,,,,,,,,,,,,,
56
+ 55,0,0,0,semantic_segmentation,-,separate,-,original,0.5,none,standard_palette,-,-,-,-,,,,,,,,,,,,,,,
57
+ 56,0,0,0,semantic_segmentation,-,separate,-,original,0.5,none,random,-,-,-,-,,,,,,,,,,,,,,,
58
+ 57,0,0,1,semantic_segmentation,-,separate,-,original,0.5,text_labels,standard_palette,-,-,-,-,,,,,,,,,,,,,,,
59
+ 58,0,0,0,semantic_segmentation,-,separate,-,original,0.5,text_labels,random,-,-,-,-,,,,,,,,,,,,,,,
60
+ 59,0,0,0,semantic_segmentation,-,separate,-,original,1,none,standard_palette,-,-,-,-,,,,,,,,,,,,,,,
61
+ 60,0,0,0,semantic_segmentation,-,separate,-,original,1,none,random,-,-,-,-,,,,,,,,,,,,,,,
62
+ 61,0,0,0,semantic_segmentation,-,separate,-,original,1,text_labels,standard_palette,-,-,-,-,,,,,,,,,,,,,,,
63
+ 62,0,0,0,semantic_segmentation,-,separate,-,original,1,text_labels,random,-,-,-,-,,,,,,,,,,,,,,,
64
+ 63,0,0,1,semantic_segmentation,-,-,-,-,-,-,-,-,-,polygon,-,,,,,,,,,,,,,,,
65
+ 64,0,0,1,semantic_segmentation,-,-,-,-,-,-,-,-,-,rle,-,,,,,,,,,,,,,,,
66
+ 65,0,0,1,referring_segmentation,-,overlay,-,sub_sampled,-,-,-,-,-,-,-,,,,,,,,,,,,,,,
67
+ 66,0,0,1,referring_segmentation,-,separate,-,sub_sampled,-,-,-,-,-,-,-,,,,,,,,,,,,,,,
68
+ 67,0,0,1,referring_segmentation,-,text,-,sub_sampled,-,-,-,-,-,-,-,,,,,,,,,,,,,,,
69
+ 68,0,1,1,referring_segmentation,-,overlay,-,original,0.5,-,-,-,fill,-,-,,,,,,,,,,,,,,,
70
+ 69,0,0,1,referring_segmentation,-,overlay,-,original,0.5,-,-,-,contour,-,-,,,,,,,,,,,,,,,
71
+ 70,1,0,1,referring_segmentation,-,overlay,-,original,0.5,-,-,-,fill_contour,-,-,,,,0.5459,0.4864 (incomplete),0.4828 (incomplete),0.6569 (incomplete),,0.4988 (incomplete),0.6475 (incomplete),0.4668,0.6192 (incomplete),0.5437,0.5862 (incomplete),0.5012 (incomplete)
72
+ 71,0,0,1,referring_segmentation,-,overlay,-,original,1,-,-,-,fill,-,-,,,,,,,,,,,,,,,
73
+ 72,0,0,0,referring_segmentation,-,overlay,-,original,1,-,-,-,contour,-,-,,,,,,,,,,,,,,,
74
+ 73,0,0,0,referring_segmentation,-,overlay,-,original,1,-,-,-,fill_contour,-,-,,,,,,,,,,,,,,,
75
+ 74,0,0,1,referring_segmentation,-,separate,-,original,0.5,-,-,-,fill,-,-,,,,,,,,,,,,,,,
76
+ 75,0,0,0,referring_segmentation,-,separate,-,original,0.5,-,-,-,contour,-,-,,,,,,,,,,,,,,,
77
+ 76,0,0,0,referring_segmentation,-,separate,-,original,0.5,-,-,-,fill_contour,-,-,,,,,,,,,,,,,,,
78
+ 77,0,0,0,referring_segmentation,-,separate,-,original,1,-,-,-,fill,-,-,,,,,,,,,,,,,,,
79
+ 78,0,0,0,referring_segmentation,-,separate,-,original,1,-,-,-,contour,-,-,,,,,,,,,,,,,,,
80
+ 79,0,0,0,referring_segmentation,-,separate,-,original,1,-,-,-,fill_contour,-,-,,,,,,,,,,,,,,,
81
+ 80,1,0,1,referring_segmentation,-,-,-,-,-,-,-,-,-,polygon,-,,,,0.5575,0.5123 (incomplete),0.5401 (incomplete),0.5850 (incomplete),,0.5205 (incomplete),0.6425 (incomplete),0.5257,0.6452 (incomplete),0.5437,0.5726 (incomplete),0.5036 (incomplete)
82
+ 81,0,0,1,referring_segmentation,-,-,-,-,-,-,-,-,-,rle,-,,,,,,,,,,,,,,,
83
+ 82,0,0,0,keypoint,points_only,overlay,-,-,-,-,same_color,-,-,-,-,,,,,,,,,,,,,,,
84
+ 83,0,0,1,keypoint,points_only,overlay,-,-,-,-,color_by_instance,-,-,-,-,,,,,,,,,,,,,,,
85
+ 84,0,0,0,keypoint,points_only,overlay,-,-,-,-,color_by_part,-,-,-,-,,,,,,,,,,,,,,,
86
+ 85,0,0,0,keypoint,points_only,separate,-,-,-,-,same_color,-,-,-,-,,,,,,,,,,,,,,,
87
+ 86,0,0,0,keypoint,points_only,separate,-,-,-,-,color_by_instance,-,-,-,-,,,,,,,,,,,,,,,
88
+ 87,0,0,0,keypoint,points_only,separate,-,-,-,-,color_by_part,-,-,-,-,,,,,,,,,,,,,,,
89
+ 88,0,0,1,keypoint,skeleton,overlay,-,-,-,-,same_color,-,-,-,-,,,,,,,,,,,,,,,
90
+ 89,1,1,1,keypoint,skeleton,overlay,-,-,-,-,color_by_instance,-,-,-,-,,,,0.5189,0.3980 (incomplete),0.5409 (incomplete),0.4909 (incomplete),,0.4732 (incomplete),0.5875 (incomplete),0.5533,0.5588 (incomplete),0.5631,0.5980 (incomplete),0.4829 (incomplete)
91
+ 90,0,0,1,keypoint,skeleton,overlay,-,-,-,-,color_by_part,-,-,-,-,,,,,,,,,,,,,,,
92
+ 91,0,0,0,keypoint,skeleton,separate,-,-,-,-,same_color,-,-,-,-,,,,,,,,,,,,,,,
93
+ 92,0,0,1,keypoint,skeleton,separate,-,-,-,-,color_by_instance,-,-,-,-,,,,,,,,,,,,,,,
94
+ 93,0,0,0,keypoint,skeleton,separate,-,-,-,-,color_by_part,-,-,-,-,,,,,,,,,,,,,,,
95
+ 94,0,0,1,keypoint,-,-,-,-,-,-,-,-,-,flat_list,-,,,,,,,,,,,,,,,
96
+ 95,0,0,1,keypoint,-,-,-,-,-,-,-,-,-,part_keyed_json,-,,,,,,,,,,,,,,,
97
+ 96,0,0,1,keypoint,-,-,-,-,-,-,-,-,-,coco_style,-,,,,,,,,,,,,,,,
98
+ 97,1,0,1,depth_estimation,-,-,-,-,-,-,-,-,-,-,plasma,,,,0.5451,0.5136 (incomplete),0.5239 (incomplete),0.5905 (incomplete),,0.5514 (incomplete),0.4925 (incomplete),0.5144,0.5947 (incomplete),0.6172,0.5962 (incomplete),0.4952 (incomplete)
99
+ 98,0,0,1,depth_estimation,-,-,-,-,-,-,-,-,-,-,turbo,,,,,,,,,,,,,,,
100
+ 99,0,0,1,depth_estimation,-,-,-,-,-,-,-,-,-,-,gray,,,,,,,,,,,,,,,
101
+ 100,1,1,1,low_level,-,-,-,-,-,-,-,-,-,-,-,,,,0.6587,,,,,,,,,,,
102
+ 101,1,1,1,generation,-,-,-,-,-,-,-,-,-,-,-,,,,0.5211,,,,,,,,,,,
103
+ 1742,1,0,1,instance_segmentation,-,-,-,-,-,-,-,-,-,-,-,42,17,1742,0.5605,0.4987 (incomplete),0.4729 (incomplete),0.6382 (incomplete),0.6667 (incomplete),0.5974 (incomplete),0.6502 (incomplete),0.5271,0.6781 (incomplete),0.4926,0.6131 (incomplete),0.5173 (incomplete)
104
+ 305,1,0,1,object_detection,-,-,-,-,-,-,-,-,-,-,-,5,3,0305,0.6202,0.5134 (incomplete),0.5332 (incomplete),0.6739 (incomplete),0.6060 (incomplete),0.5485 (incomplete),0.7125 (incomplete),0.6376,0.7694 (incomplete),0.6549,0.6275 (incomplete),0.5428 (incomplete)
105
+ 7080,1,0,1,referring_segmentation,-,-,-,-,-,-,-,-,-,-,-,80,70,7080,0.5455,0.4840 (incomplete),0.4939 (incomplete),0.6261 (incomplete),,0.4927 (incomplete),0.6500 (incomplete),0.4963,0.6716 (incomplete),0.5196,0.4779 (incomplete),0.5097 (incomplete)
106
+ 4649,1,0,1,semantic_segmentation,-,-,-,-,-,-,-,-,-,-,-,46,49,4649,,,,,,,,,,,,
docs/encoding_variants_summary_v2.1.csv ADDED
@@ -0,0 +1,106 @@
1
+ encoding_id,is_final,is_base,is_base_ablation,task,pixel_style,mask_embed,label_fmt,sub_sampling,opacity,label_overlay,color_scheme,instance_bbox,mask_style,text_format,colormap,combo_text_id,combo_image_id,combo_stem
2
+ 1,1,0,1,object_detection,box_only,overlay,-,-,-,-,-,-,-,-,-,-,-,-
3
+ 2,0,0,0,object_detection,box_only,separate,-,-,-,-,-,-,-,-,-,,,
4
+ 3,1,1,1,object_detection,box_label,overlay,-,-,-,-,-,-,-,-,-,,,
5
+ 4,1,0,1,object_detection,box_label,separate,-,-,-,-,-,-,-,-,-,,,
6
+ 5,1,0,1,object_detection,-,-,-,-,-,-,-,-,-,xyxy,-,,,
7
+ 6,1,0,1,object_detection,-,-,-,-,-,-,-,-,-,xywh,-,,,
8
+ 7,1,0,1,instance_segmentation,-,overlay,-,sub_sampled,-,-,-,-,-,-,-,,,
9
+ 8,1,0,1,instance_segmentation,-,separate,-,sub_sampled,-,-,-,-,-,-,-,,,
10
+ 9,1,0,1,instance_segmentation,-,text,-,sub_sampled,-,-,-,-,-,-,-,,,
11
+ 10,0,0,0,instance_segmentation,-,overlay,-,original,0.5,none,color_by_class,none,-,-,-,,,
12
+ 11,1,0,1,instance_segmentation,-,overlay,-,original,0.5,none,color_by_class,dashed,-,-,-,,,
13
+ 12,0,0,0,instance_segmentation,-,overlay,-,original,0.5,none,color_by_instance,none,-,-,-,,,
14
+ 13,0,0,0,instance_segmentation,-,overlay,-,original,0.5,none,color_by_instance,dashed,-,-,-,,,
15
+ 14,1,0,1,instance_segmentation,-,overlay,-,original,0.5,text_labels,color_by_class,none,-,-,-,,,
16
+ 15,1,1,1,instance_segmentation,-,overlay,-,original,0.5,text_labels,color_by_class,dashed,-,-,-,,,
17
+ 16,0,0,0,instance_segmentation,-,overlay,-,original,0.5,text_labels,color_by_instance,none,-,-,-,,,
18
+ 17,1,0,1,instance_segmentation,-,overlay,-,original,0.5,text_labels,color_by_instance,dashed,-,-,-,,,
19
+ 18,0,0,0,instance_segmentation,-,overlay,-,original,1,none,color_by_class,none,-,-,-,,,
20
+ 19,0,0,0,instance_segmentation,-,overlay,-,original,1,none,color_by_class,dashed,-,-,-,,,
21
+ 20,0,0,0,instance_segmentation,-,overlay,-,original,1,none,color_by_instance,none,-,-,-,,,
22
+ 21,0,0,0,instance_segmentation,-,overlay,-,original,1,none,color_by_instance,dashed,-,-,-,,,
23
+ 22,0,0,0,instance_segmentation,-,overlay,-,original,1,text_labels,color_by_class,none,-,-,-,,,
24
+ 23,1,0,1,instance_segmentation,-,overlay,-,original,1,text_labels,color_by_class,dashed,-,-,-,,,
25
+ 24,0,0,0,instance_segmentation,-,overlay,-,original,1,text_labels,color_by_instance,none,-,-,-,,,
26
+ 25,0,0,0,instance_segmentation,-,overlay,-,original,1,text_labels,color_by_instance,dashed,-,-,-,,,
27
+ 26,0,0,0,instance_segmentation,-,separate,-,original,0.5,none,color_by_class,none,-,-,-,,,
28
+ 27,0,0,0,instance_segmentation,-,separate,-,original,0.5,none,color_by_class,dashed,-,-,-,,,
29
+ 28,0,0,0,instance_segmentation,-,separate,-,original,0.5,none,color_by_instance,none,-,-,-,,,
30
+ 29,0,0,0,instance_segmentation,-,separate,-,original,0.5,none,color_by_instance,dashed,-,-,-,,,
31
+ 30,0,0,0,instance_segmentation,-,separate,-,original,0.5,text_labels,color_by_class,none,-,-,-,,,
32
+ 31,1,0,1,instance_segmentation,-,separate,-,original,0.5,text_labels,color_by_class,dashed,-,-,-,,,
33
+ 32,0,0,0,instance_segmentation,-,separate,-,original,0.5,text_labels,color_by_instance,none,-,-,-,,,
34
+ 33,0,0,0,instance_segmentation,-,separate,-,original,0.5,text_labels,color_by_instance,dashed,-,-,-,,,
35
+ 34,0,0,0,instance_segmentation,-,separate,-,original,1,none,color_by_class,none,-,-,-,,,
36
+ 35,0,0,0,instance_segmentation,-,separate,-,original,1,none,color_by_class,dashed,-,-,-,,,
37
+ 36,0,0,0,instance_segmentation,-,separate,-,original,1,none,color_by_instance,none,-,-,-,,,
38
+ 37,0,0,0,instance_segmentation,-,separate,-,original,1,none,color_by_instance,dashed,-,-,-,,,
39
+ 38,0,0,0,instance_segmentation,-,separate,-,original,1,text_labels,color_by_class,none,-,-,-,,,
40
+ 39,0,0,0,instance_segmentation,-,separate,-,original,1,text_labels,color_by_class,dashed,-,-,-,,,
41
+ 40,0,0,0,instance_segmentation,-,separate,-,original,1,text_labels,color_by_instance,none,-,-,-,,,
42
+ 41,0,0,0,instance_segmentation,-,separate,-,original,1,text_labels,color_by_instance,dashed,-,-,-,,,
43
+ 42,1,0,1,instance_segmentation,-,-,-,-,-,-,-,-,-,polygon,-,,,
44
+ 43,1,0,1,instance_segmentation,-,-,-,-,-,-,-,-,-,rle,-,,,
45
+ 44,1,0,1,semantic_segmentation,-,overlay,-,sub_sampled,-,-,-,-,-,-,-,,,
46
+ 45,1,0,1,semantic_segmentation,-,separate,-,sub_sampled,-,-,-,-,-,-,-,,,
47
+ 46,1,0,1,semantic_segmentation,-,text,-,sub_sampled,-,-,-,-,-,-,-,,,
48
+ 47,1,0,1,semantic_segmentation,-,overlay,-,original,0.5,none,standard_palette,-,-,-,-,,,
49
+ 48,0,0,0,semantic_segmentation,-,overlay,-,original,0.5,none,random,-,-,-,-,,,
50
+ 49,1,1,1,semantic_segmentation,-,overlay,-,original,0.5,text_labels,standard_palette,-,-,-,-,,,
51
+ 50,1,0,1,semantic_segmentation,-,overlay,-,original,0.5,text_labels,random,-,-,-,-,,,
52
+ 51,0,0,0,semantic_segmentation,-,overlay,-,original,1,none,standard_palette,-,-,-,-,,,
53
+ 52,0,0,0,semantic_segmentation,-,overlay,-,original,1,none,random,-,-,-,-,,,
54
+ 53,1,0,1,semantic_segmentation,-,overlay,-,original,1,text_labels,standard_palette,-,-,-,-,,,
55
+ 54,0,0,0,semantic_segmentation,-,overlay,-,original,1,text_labels,random,-,-,-,-,,,
56
+ 55,0,0,0,semantic_segmentation,-,separate,-,original,0.5,none,standard_palette,-,-,-,-,,,
57
+ 56,0,0,0,semantic_segmentation,-,separate,-,original,0.5,none,random,-,-,-,-,,,
58
+ 57,1,0,1,semantic_segmentation,-,separate,-,original,0.5,text_labels,standard_palette,-,-,-,-,,,
59
+ 58,0,0,0,semantic_segmentation,-,separate,-,original,0.5,text_labels,random,-,-,-,-,,,
60
+ 59,0,0,0,semantic_segmentation,-,separate,-,original,1,none,standard_palette,-,-,-,-,,,
61
+ 60,0,0,0,semantic_segmentation,-,separate,-,original,1,none,random,-,-,-,-,,,
62
+ 61,0,0,0,semantic_segmentation,-,separate,-,original,1,text_labels,standard_palette,-,-,-,-,,,
63
+ 62,0,0,0,semantic_segmentation,-,separate,-,original,1,text_labels,random,-,-,-,-,,,
64
+ 63,1,0,1,semantic_segmentation,-,-,-,-,-,-,-,-,-,polygon,-,,,
65
+ 64,1,0,1,semantic_segmentation,-,-,-,-,-,-,-,-,-,rle,-,,,
66
+ 65,1,0,1,referring_segmentation,-,overlay,-,sub_sampled,-,-,-,-,-,-,-,,,
67
+ 66,1,0,1,referring_segmentation,-,separate,-,sub_sampled,-,-,-,-,-,-,-,,,
68
+ 67,1,0,1,referring_segmentation,-,text,-,sub_sampled,-,-,-,-,-,-,-,,,
69
+ 68,1,1,1,referring_segmentation,-,overlay,-,original,0.5,-,-,-,fill,-,-,,,
70
+ 69,1,0,1,referring_segmentation,-,overlay,-,original,0.5,-,-,-,contour,-,-,,,
71
+ 70,1,0,1,referring_segmentation,-,overlay,-,original,0.5,-,-,-,fill_contour,-,-,,,
72
+ 71,1,0,1,referring_segmentation,-,overlay,-,original,1,-,-,-,fill,-,-,,,
73
+ 72,0,0,0,referring_segmentation,-,overlay,-,original,1,-,-,-,contour,-,-,,,
74
+ 73,0,0,0,referring_segmentation,-,overlay,-,original,1,-,-,-,fill_contour,-,-,,,
75
+ 74,1,0,1,referring_segmentation,-,separate,-,original,0.5,-,-,-,fill,-,-,,,
76
+ 75,0,0,0,referring_segmentation,-,separate,-,original,0.5,-,-,-,contour,-,-,,,
77
+ 76,0,0,0,referring_segmentation,-,separate,-,original,0.5,-,-,-,fill_contour,-,-,,,
78
+ 77,0,0,0,referring_segmentation,-,separate,-,original,1,-,-,-,fill,-,-,,,
79
+ 78,0,0,0,referring_segmentation,-,separate,-,original,1,-,-,-,contour,-,-,,,
80
+ 79,0,0,0,referring_segmentation,-,separate,-,original,1,-,-,-,fill_contour,-,-,,,
81
+ 80,1,0,1,referring_segmentation,-,-,-,-,-,-,-,-,-,polygon,-,,,
82
+ 81,1,0,1,referring_segmentation,-,-,-,-,-,-,-,-,-,rle,-,,,
83
+ 82,0,0,0,keypoint,points_only,overlay,-,-,-,-,same_color,-,-,-,-,,,
84
+ 83,1,0,1,keypoint,points_only,overlay,-,-,-,-,color_by_instance,-,-,-,-,,,
85
+ 84,0,0,0,keypoint,points_only,overlay,-,-,-,-,color_by_part,-,-,-,-,,,
86
+ 85,0,0,0,keypoint,points_only,separate,-,-,-,-,same_color,-,-,-,-,,,
87
+ 86,0,0,0,keypoint,points_only,separate,-,-,-,-,color_by_instance,-,-,-,-,,,
88
+ 87,0,0,0,keypoint,points_only,separate,-,-,-,-,color_by_part,-,-,-,-,,,
89
+ 88,1,0,1,keypoint,skeleton,overlay,-,-,-,-,same_color,-,-,-,-,,,
90
+ 89,1,1,1,keypoint,skeleton,overlay,-,-,-,-,color_by_instance,-,-,-,-,,,
91
+ 90,1,0,1,keypoint,skeleton,overlay,-,-,-,-,color_by_part,-,-,-,-,,,
92
+ 91,0,0,0,keypoint,skeleton,separate,-,-,-,-,same_color,-,-,-,-,,,
93
+ 92,1,0,1,keypoint,skeleton,separate,-,-,-,-,color_by_instance,-,-,-,-,,,
94
+ 93,0,0,0,keypoint,skeleton,separate,-,-,-,-,color_by_part,-,-,-,-,,,
95
+ 94,1,0,1,keypoint,-,-,-,-,-,-,-,-,-,flat_list,-,,,
96
+ 95,1,0,1,keypoint,-,-,-,-,-,-,-,-,-,part_keyed_json,-,,,
97
+ 96,1,0,1,keypoint,-,-,-,-,-,-,-,-,-,coco_style,-,,,
98
+ 97,1,0,1,depth_estimation,-,-,-,-,-,-,-,-,-,-,plasma,,,
99
+ 98,1,0,1,depth_estimation,-,-,-,-,-,-,-,-,-,-,turbo,,,
100
+ 99,1,0,1,depth_estimation,-,-,-,-,-,-,-,-,-,-,gray,,,
101
+ 100,1,1,0,low_level,-,-,-,-,-,-,-,-,-,-,-,,,
102
+ 101,1,1,0,generation,-,-,-,-,-,-,-,-,-,-,-,,,
103
+ 1742,1,0,1,instance_segmentation,-,-,-,-,-,-,-,-,-,-,-,42,17,1742
104
+ 305,1,0,1,object_detection,-,-,-,-,-,-,-,-,-,-,-,5,3,0305
105
+ 7080,1,0,1,referring_segmentation,-,-,-,-,-,-,-,-,-,-,-,80,70,7080
106
+ 4649,1,0,1,semantic_segmentation,-,-,-,-,-,-,-,-,-,-,-,46,49,4649
docs/encoding_variants_summary_vfinal1.csv ADDED
@@ -0,0 +1,106 @@
1
+ encoding_id,is_final,is_base,is_base_ablation,task,pixel_style,mask_embed,label_fmt,sub_sampling,opacity,label_overlay,color_scheme,instance_bbox,mask_style,text_format,colormap,combo_text_id,combo_image_id,combo_stem
2
+ 1,0,0,1,object_detection,box_only,overlay,-,-,-,-,-,-,-,-,-,,,-
3
+ 2,0,0,0,object_detection,box_only,separate,-,-,-,-,-,-,-,-,-,,,
4
+ 3,1,1,1,object_detection,box_label,overlay,-,-,-,-,-,-,-,-,-,,,
5
+ 4,0,0,1,object_detection,box_label,separate,-,-,-,-,-,-,-,-,-,,,
6
+ 5,1,0,1,object_detection,-,-,-,-,-,-,-,-,-,xyxy,-,,,
7
+ 6,0,0,1,object_detection,-,-,-,-,-,-,-,-,-,xywh,-,,,
8
+ 7,0,0,1,instance_segmentation,-,overlay,-,sub_sampled,-,-,-,-,-,-,-,,,
9
+ 8,0,0,1,instance_segmentation,-,separate,-,sub_sampled,-,-,-,-,-,-,-,,,
10
+ 9,0,0,1,instance_segmentation,-,text,-,sub_sampled,-,-,-,-,-,-,-,,,
11
+ 10,0,0,0,instance_segmentation,-,overlay,-,original,0.5,none,color_by_class,none,-,-,-,,,
12
+ 11,0,0,1,instance_segmentation,-,overlay,-,original,0.5,none,color_by_class,dashed,-,-,-,,,
13
+ 12,0,0,0,instance_segmentation,-,overlay,-,original,0.5,none,color_by_instance,none,-,-,-,,,
14
+ 13,0,0,0,instance_segmentation,-,overlay,-,original,0.5,none,color_by_instance,dashed,-,-,-,,,
15
+ 14,0,0,1,instance_segmentation,-,overlay,-,original,0.5,text_labels,color_by_class,none,-,-,-,,,
16
+ 15,0,1,1,instance_segmentation,-,overlay,-,original,0.5,text_labels,color_by_class,dashed,-,-,-,,,
17
+ 16,0,0,0,instance_segmentation,-,overlay,-,original,0.5,text_labels,color_by_instance,none,-,-,-,,,
18
+ 17,1,0,1,instance_segmentation,-,overlay,-,original,0.5,text_labels,color_by_instance,dashed,-,-,-,,,
19
+ 18,0,0,0,instance_segmentation,-,overlay,-,original,1,none,color_by_class,none,-,-,-,,,
20
+ 19,0,0,0,instance_segmentation,-,overlay,-,original,1,none,color_by_class,dashed,-,-,-,,,
21
+ 20,0,0,0,instance_segmentation,-,overlay,-,original,1,none,color_by_instance,none,-,-,-,,,
22
+ 21,0,0,0,instance_segmentation,-,overlay,-,original,1,none,color_by_instance,dashed,-,-,-,,,
23
+ 22,0,0,0,instance_segmentation,-,overlay,-,original,1,text_labels,color_by_class,none,-,-,-,,,
24
+ 23,0,0,1,instance_segmentation,-,overlay,-,original,1,text_labels,color_by_class,dashed,-,-,-,,,
25
+ 24,0,0,0,instance_segmentation,-,overlay,-,original,1,text_labels,color_by_instance,none,-,-,-,,,
26
+ 25,0,0,0,instance_segmentation,-,overlay,-,original,1,text_labels,color_by_instance,dashed,-,-,-,,,
27
+ 26,0,0,0,instance_segmentation,-,separate,-,original,0.5,none,color_by_class,none,-,-,-,,,
28
+ 27,0,0,0,instance_segmentation,-,separate,-,original,0.5,none,color_by_class,dashed,-,-,-,,,
29
+ 28,0,0,0,instance_segmentation,-,separate,-,original,0.5,none,color_by_instance,none,-,-,-,,,
30
+ 29,0,0,0,instance_segmentation,-,separate,-,original,0.5,none,color_by_instance,dashed,-,-,-,,,
31
+ 30,0,0,0,instance_segmentation,-,separate,-,original,0.5,text_labels,color_by_class,none,-,-,-,,,
32
+ 31,0,0,1,instance_segmentation,-,separate,-,original,0.5,text_labels,color_by_class,dashed,-,-,-,,,
33
+ 32,0,0,0,instance_segmentation,-,separate,-,original,0.5,text_labels,color_by_instance,none,-,-,-,,,
34
+ 33,0,0,0,instance_segmentation,-,separate,-,original,0.5,text_labels,color_by_instance,dashed,-,-,-,,,
35
+ 34,0,0,0,instance_segmentation,-,separate,-,original,1,none,color_by_class,none,-,-,-,,,
36
+ 35,0,0,0,instance_segmentation,-,separate,-,original,1,none,color_by_class,dashed,-,-,-,,,
37
+ 36,0,0,0,instance_segmentation,-,separate,-,original,1,none,color_by_instance,none,-,-,-,,,
38
+ 37,0,0,0,instance_segmentation,-,separate,-,original,1,none,color_by_instance,dashed,-,-,-,,,
39
+ 38,0,0,0,instance_segmentation,-,separate,-,original,1,text_labels,color_by_class,none,-,-,-,,,
40
+ 39,0,0,0,instance_segmentation,-,separate,-,original,1,text_labels,color_by_class,dashed,-,-,-,,,
41
+ 40,0,0,0,instance_segmentation,-,separate,-,original,1,text_labels,color_by_instance,none,-,-,-,,,
42
+ 41,0,0,0,instance_segmentation,-,separate,-,original,1,text_labels,color_by_instance,dashed,-,-,-,,,
43
+ 42,1,0,1,instance_segmentation,-,-,-,-,-,-,-,-,-,polygon,-,,,
44
+ 43,0,0,1,instance_segmentation,-,-,-,-,-,-,-,-,-,rle,-,,,
45
+ 44,0,0,1,semantic_segmentation,-,overlay,-,sub_sampled,-,-,-,-,-,-,-,,,
46
+ 45,0,0,1,semantic_segmentation,-,separate,-,sub_sampled,-,-,-,-,-,-,-,,,
47
+ 46,0,0,1,semantic_segmentation,-,text,-,sub_sampled,-,-,-,-,-,-,-,,,
48
+ 47,0,0,1,semantic_segmentation,-,overlay,-,original,0.5,none,standard_palette,-,-,-,-,,,
49
+ 48,0,0,0,semantic_segmentation,-,overlay,-,original,0.5,none,random,-,-,-,-,,,
50
+ 49,0,1,1,semantic_segmentation,-,overlay,-,original,0.5,text_labels,standard_palette,-,-,-,-,,,
51
+ 50,0,0,1,semantic_segmentation,-,overlay,-,original,0.5,text_labels,random,-,-,-,-,,,
52
+ 51,0,0,0,semantic_segmentation,-,overlay,-,original,1,none,standard_palette,-,-,-,-,,,
53
+ 52,0,0,0,semantic_segmentation,-,overlay,-,original,1,none,random,-,-,-,-,,,
54
+ 53,0,0,1,semantic_segmentation,-,overlay,-,original,1,text_labels,standard_palette,-,-,-,-,,,
55
+ 54,0,0,0,semantic_segmentation,-,overlay,-,original,1,text_labels,random,-,-,-,-,,,
56
+ 55,0,0,0,semantic_segmentation,-,separate,-,original,0.5,none,standard_palette,-,-,-,-,,,
57
+ 56,0,0,0,semantic_segmentation,-,separate,-,original,0.5,none,random,-,-,-,-,,,
58
+ 57,0,0,1,semantic_segmentation,-,separate,-,original,0.5,text_labels,standard_palette,-,-,-,-,,,
59
+ 58,0,0,0,semantic_segmentation,-,separate,-,original,0.5,text_labels,random,-,-,-,-,,,
60
+ 59,0,0,0,semantic_segmentation,-,separate,-,original,1,none,standard_palette,-,-,-,-,,,
61
+ 60,0,0,0,semantic_segmentation,-,separate,-,original,1,none,random,-,-,-,-,,,
62
+ 61,0,0,0,semantic_segmentation,-,separate,-,original,1,text_labels,standard_palette,-,-,-,-,,,
63
+ 62,0,0,0,semantic_segmentation,-,separate,-,original,1,text_labels,random,-,-,-,-,,,
64
+ 63,0,0,1,semantic_segmentation,-,-,-,-,-,-,-,-,-,polygon,-,,,
65
+ 64,0,0,1,semantic_segmentation,-,-,-,-,-,-,-,-,-,rle,-,,,
66
+ 65,0,0,1,referring_segmentation,-,overlay,-,sub_sampled,-,-,-,-,-,-,-,,,
67
+ 66,0,0,1,referring_segmentation,-,separate,-,sub_sampled,-,-,-,-,-,-,-,,,
68
+ 67,0,0,1,referring_segmentation,-,text,-,sub_sampled,-,-,-,-,-,-,-,,,
69
+ 68,0,1,1,referring_segmentation,-,overlay,-,original,0.5,-,-,-,fill,-,-,,,
70
+ 69,0,0,1,referring_segmentation,-,overlay,-,original,0.5,-,-,-,contour,-,-,,,
71
+ 70,1,0,1,referring_segmentation,-,overlay,-,original,0.5,-,-,-,fill_contour,-,-,,,
72
+ 71,0,0,1,referring_segmentation,-,overlay,-,original,1,-,-,-,fill,-,-,,,
73
+ 72,0,0,0,referring_segmentation,-,overlay,-,original,1,-,-,-,contour,-,-,,,
74
+ 73,0,0,0,referring_segmentation,-,overlay,-,original,1,-,-,-,fill_contour,-,-,,,
75
+ 74,0,0,1,referring_segmentation,-,separate,-,original,0.5,-,-,-,fill,-,-,,,
76
+ 75,0,0,0,referring_segmentation,-,separate,-,original,0.5,-,-,-,contour,-,-,,,
77
+ 76,0,0,0,referring_segmentation,-,separate,-,original,0.5,-,-,-,fill_contour,-,-,,,
78
+ 77,0,0,0,referring_segmentation,-,separate,-,original,1,-,-,-,fill,-,-,,,
79
+ 78,0,0,0,referring_segmentation,-,separate,-,original,1,-,-,-,contour,-,-,,,
80
+ 79,0,0,0,referring_segmentation,-,separate,-,original,1,-,-,-,fill_contour,-,-,,,
81
+ 80,1,0,1,referring_segmentation,-,-,-,-,-,-,-,-,-,polygon,-,,,
82
+ 81,0,0,1,referring_segmentation,-,-,-,-,-,-,-,-,-,rle,-,,,
83
+ 82,0,0,0,keypoint,points_only,overlay,-,-,-,-,same_color,-,-,-,-,,,
84
+ 83,0,0,1,keypoint,points_only,overlay,-,-,-,-,color_by_instance,-,-,-,-,,,
85
+ 84,0,0,0,keypoint,points_only,overlay,-,-,-,-,color_by_part,-,-,-,-,,,
86
+ 85,0,0,0,keypoint,points_only,separate,-,-,-,-,same_color,-,-,-,-,,,
87
+ 86,0,0,0,keypoint,points_only,separate,-,-,-,-,color_by_instance,-,-,-,-,,,
88
+ 87,0,0,0,keypoint,points_only,separate,-,-,-,-,color_by_part,-,-,-,-,,,
89
+ 88,0,0,1,keypoint,skeleton,overlay,-,-,-,-,same_color,-,-,-,-,,,
90
+ 89,1,1,1,keypoint,skeleton,overlay,-,-,-,-,color_by_instance,-,-,-,-,,,
91
+ 90,0,0,1,keypoint,skeleton,overlay,-,-,-,-,color_by_part,-,-,-,-,,,
92
+ 91,0,0,0,keypoint,skeleton,separate,-,-,-,-,same_color,-,-,-,-,,,
93
+ 92,0,0,1,keypoint,skeleton,separate,-,-,-,-,color_by_instance,-,-,-,-,,,
94
+ 93,0,0,0,keypoint,skeleton,separate,-,-,-,-,color_by_part,-,-,-,-,,,
95
+ 94,0,0,1,keypoint,-,-,-,-,-,-,-,-,-,flat_list,-,,,
96
+ 95,0,0,1,keypoint,-,-,-,-,-,-,-,-,-,part_keyed_json,-,,,
97
+ 96,0,0,1,keypoint,-,-,-,-,-,-,-,-,-,coco_style,-,,,
98
+ 97,1,0,1,depth_estimation,-,-,-,-,-,-,-,-,-,-,plasma,,,
99
+ 98,0,0,1,depth_estimation,-,-,-,-,-,-,-,-,-,-,turbo,,,
100
+ 99,0,0,1,depth_estimation,-,-,-,-,-,-,-,-,-,-,gray,,,
101
+ 100,1,1,1,low_level,-,-,-,-,-,-,-,-,-,-,-,,,
102
+ 101,1,1,1,generation,-,-,-,-,-,-,-,-,-,-,-,,,
103
+ 1742,1,0,1,instance_segmentation,-,-,-,-,-,-,-,-,-,-,-,42,17,1742
104
+ 305,1,0,1,object_detection,-,-,-,-,-,-,-,-,-,-,-,5,3,0305
105
+ 7080,1,0,1,referring_segmentation,-,-,-,-,-,-,-,-,-,-,-,80,70,7080
106
+ 4649,0,0,1,semantic_segmentation,-,-,-,-,-,-,-,-,-,-,-,46,49,4649
docs/encoding_variants_summary_vfinal2.csv ADDED
@@ -0,0 +1,106 @@
1
+ encoding_id,is_final,is_base,is_base_ablation,task,pixel_style,mask_embed,label_fmt,sub_sampling,opacity,label_overlay,color_scheme,instance_bbox,mask_style,text_format,colormap,combo_text_id,combo_image_id,combo_stem
2
+ 1,0,0,1,object_detection,box_only,overlay,-,-,-,-,-,-,-,-,-,-,-,-
3
+ 2,0,0,0,object_detection,box_only,separate,-,-,-,-,-,-,-,-,-,,,
4
+ 3,0,1,1,object_detection,box_label,overlay,-,-,-,-,-,-,-,-,-,,,
5
+ 4,0,0,1,object_detection,box_label,separate,-,-,-,-,-,-,-,-,-,,,
6
+ 5,0,0,1,object_detection,-,-,-,-,-,-,-,-,-,xyxy,-,,,
7
+ 6,0,0,1,object_detection,-,-,-,-,-,-,-,-,-,xywh,-,,,
8
+ 7,0,0,1,instance_segmentation,-,overlay,-,sub_sampled,-,-,-,-,-,-,-,,,
9
+ 8,0,0,1,instance_segmentation,-,separate,-,sub_sampled,-,-,-,-,-,-,-,,,
10
+ 9,0,0,1,instance_segmentation,-,text,-,sub_sampled,-,-,-,-,-,-,-,,,
11
+ 10,0,0,0,instance_segmentation,-,overlay,-,original,0.5,none,color_by_class,none,-,-,-,,,
12
+ 11,0,0,1,instance_segmentation,-,overlay,-,original,0.5,none,color_by_class,dashed,-,-,-,,,
13
+ 12,0,0,0,instance_segmentation,-,overlay,-,original,0.5,none,color_by_instance,none,-,-,-,,,
14
+ 13,0,0,0,instance_segmentation,-,overlay,-,original,0.5,none,color_by_instance,dashed,-,-,-,,,
15
+ 14,0,0,1,instance_segmentation,-,overlay,-,original,0.5,text_labels,color_by_class,none,-,-,-,,,
16
+ 15,0,1,1,instance_segmentation,-,overlay,-,original,0.5,text_labels,color_by_class,dashed,-,-,-,,,
17
+ 16,0,0,0,instance_segmentation,-,overlay,-,original,0.5,text_labels,color_by_instance,none,-,-,-,,,
18
+ 17,0,0,1,instance_segmentation,-,overlay,-,original,0.5,text_labels,color_by_instance,dashed,-,-,-,,,
19
+ 18,0,0,0,instance_segmentation,-,overlay,-,original,1,none,color_by_class,none,-,-,-,,,
20
+ 19,0,0,0,instance_segmentation,-,overlay,-,original,1,none,color_by_class,dashed,-,-,-,,,
21
+ 20,0,0,0,instance_segmentation,-,overlay,-,original,1,none,color_by_instance,none,-,-,-,,,
22
+ 21,0,0,0,instance_segmentation,-,overlay,-,original,1,none,color_by_instance,dashed,-,-,-,,,
23
+ 22,0,0,0,instance_segmentation,-,overlay,-,original,1,text_labels,color_by_class,none,-,-,-,,,
24
+ 23,0,0,1,instance_segmentation,-,overlay,-,original,1,text_labels,color_by_class,dashed,-,-,-,,,
25
+ 24,0,0,0,instance_segmentation,-,overlay,-,original,1,text_labels,color_by_instance,none,-,-,-,,,
26
+ 25,0,0,0,instance_segmentation,-,overlay,-,original,1,text_labels,color_by_instance,dashed,-,-,-,,,
27
+ 26,0,0,0,instance_segmentation,-,separate,-,original,0.5,none,color_by_class,none,-,-,-,,,
28
+ 27,0,0,0,instance_segmentation,-,separate,-,original,0.5,none,color_by_class,dashed,-,-,-,,,
29
+ 28,0,0,0,instance_segmentation,-,separate,-,original,0.5,none,color_by_instance,none,-,-,-,,,
30
+ 29,0,0,0,instance_segmentation,-,separate,-,original,0.5,none,color_by_instance,dashed,-,-,-,,,
31
+ 30,0,0,0,instance_segmentation,-,separate,-,original,0.5,text_labels,color_by_class,none,-,-,-,,,
32
+ 31,0,0,1,instance_segmentation,-,separate,-,original,0.5,text_labels,color_by_class,dashed,-,-,-,,,
33
+ 32,0,0,0,instance_segmentation,-,separate,-,original,0.5,text_labels,color_by_instance,none,-,-,-,,,
34
+ 33,0,0,0,instance_segmentation,-,separate,-,original,0.5,text_labels,color_by_instance,dashed,-,-,-,,,
35
+ 34,0,0,0,instance_segmentation,-,separate,-,original,1,none,color_by_class,none,-,-,-,,,
36
+ 35,0,0,0,instance_segmentation,-,separate,-,original,1,none,color_by_class,dashed,-,-,-,,,
37
+ 36,0,0,0,instance_segmentation,-,separate,-,original,1,none,color_by_instance,none,-,-,-,,,
38
+ 37,0,0,0,instance_segmentation,-,separate,-,original,1,none,color_by_instance,dashed,-,-,-,,,
39
+ 38,0,0,0,instance_segmentation,-,separate,-,original,1,text_labels,color_by_class,none,-,-,-,,,
40
+ 39,0,0,0,instance_segmentation,-,separate,-,original,1,text_labels,color_by_class,dashed,-,-,-,,,
41
+ 40,0,0,0,instance_segmentation,-,separate,-,original,1,text_labels,color_by_instance,none,-,-,-,,,
42
+ 41,0,0,0,instance_segmentation,-,separate,-,original,1,text_labels,color_by_instance,dashed,-,-,-,,,
43
+ 42,0,0,1,instance_segmentation,-,-,-,-,-,-,-,-,-,polygon,-,,,
44
+ 43,0,0,1,instance_segmentation,-,-,-,-,-,-,-,-,-,rle,-,,,
45
+ 44,0,0,1,semantic_segmentation,-,overlay,-,sub_sampled,-,-,-,-,-,-,-,,,
46
+ 45,0,0,1,semantic_segmentation,-,separate,-,sub_sampled,-,-,-,-,-,-,-,,,
47
+ 46,1,0,1,semantic_segmentation,-,text,-,sub_sampled,-,-,-,-,-,-,-,,,
48
+ 47,0,0,1,semantic_segmentation,-,overlay,-,original,0.5,none,standard_palette,-,-,-,-,,,
49
+ 48,0,0,0,semantic_segmentation,-,overlay,-,original,0.5,none,random,-,-,-,-,,,
50
+ 49,1,1,1,semantic_segmentation,-,overlay,-,original,0.5,text_labels,standard_palette,-,-,-,-,,,
51
+ 50,0,0,1,semantic_segmentation,-,overlay,-,original,0.5,text_labels,random,-,-,-,-,,,
52
+ 51,0,0,0,semantic_segmentation,-,overlay,-,original,1,none,standard_palette,-,-,-,-,,,
53
+ 52,0,0,0,semantic_segmentation,-,overlay,-,original,1,none,random,-,-,-,-,,,
54
+ 53,0,0,1,semantic_segmentation,-,overlay,-,original,1,text_labels,standard_palette,-,-,-,-,,,
55
+ 54,0,0,0,semantic_segmentation,-,overlay,-,original,1,text_labels,random,-,-,-,-,,,
56
+ 55,0,0,0,semantic_segmentation,-,separate,-,original,0.5,none,standard_palette,-,-,-,-,,,
57
+ 56,0,0,0,semantic_segmentation,-,separate,-,original,0.5,none,random,-,-,-,-,,,
58
+ 57,0,0,1,semantic_segmentation,-,separate,-,original,0.5,text_labels,standard_palette,-,-,-,-,,,
59
+ 58,0,0,0,semantic_segmentation,-,separate,-,original,0.5,text_labels,random,-,-,-,-,,,
60
+ 59,0,0,0,semantic_segmentation,-,separate,-,original,1,none,standard_palette,-,-,-,-,,,
61
+ 60,0,0,0,semantic_segmentation,-,separate,-,original,1,none,random,-,-,-,-,,,
62
+ 61,0,0,0,semantic_segmentation,-,separate,-,original,1,text_labels,standard_palette,-,-,-,-,,,
63
+ 62,0,0,0,semantic_segmentation,-,separate,-,original,1,text_labels,random,-,-,-,-,,,
64
+ 63,0,0,1,semantic_segmentation,-,-,-,-,-,-,-,-,-,polygon,-,,,
65
+ 64,0,0,1,semantic_segmentation,-,-,-,-,-,-,-,-,-,rle,-,,,
66
+ 65,0,0,1,referring_segmentation,-,overlay,-,sub_sampled,-,-,-,-,-,-,-,,,
67
+ 66,0,0,1,referring_segmentation,-,separate,-,sub_sampled,-,-,-,-,-,-,-,,,
68
+ 67,0,0,1,referring_segmentation,-,text,-,sub_sampled,-,-,-,-,-,-,-,,,
69
+ 68,0,1,1,referring_segmentation,-,overlay,-,original,0.5,-,-,-,fill,-,-,,,
70
+ 69,0,0,1,referring_segmentation,-,overlay,-,original,0.5,-,-,-,contour,-,-,,,
71
+ 70,0,0,1,referring_segmentation,-,overlay,-,original,0.5,-,-,-,fill_contour,-,-,,,
72
+ 71,0,0,1,referring_segmentation,-,overlay,-,original,1,-,-,-,fill,-,-,,,
73
+ 72,0,0,0,referring_segmentation,-,overlay,-,original,1,-,-,-,contour,-,-,,,
74
+ 73,0,0,0,referring_segmentation,-,overlay,-,original,1,-,-,-,fill_contour,-,-,,,
75
+ 74,0,0,1,referring_segmentation,-,separate,-,original,0.5,-,-,-,fill,-,-,,,
76
+ 75,0,0,0,referring_segmentation,-,separate,-,original,0.5,-,-,-,contour,-,-,,,
77
+ 76,0,0,0,referring_segmentation,-,separate,-,original,0.5,-,-,-,fill_contour,-,-,,,
78
+ 77,0,0,0,referring_segmentation,-,separate,-,original,1,-,-,-,fill,-,-,,,
79
+ 78,0,0,0,referring_segmentation,-,separate,-,original,1,-,-,-,contour,-,-,,,
80
+ 79,0,0,0,referring_segmentation,-,separate,-,original,1,-,-,-,fill_contour,-,-,,,
81
+ 80,0,0,1,referring_segmentation,-,-,-,-,-,-,-,-,-,polygon,-,,,
82
+ 81,0,0,1,referring_segmentation,-,-,-,-,-,-,-,-,-,rle,-,,,
83
+ 82,0,0,0,keypoint,points_only,overlay,-,-,-,-,same_color,-,-,-,-,,,
84
+ 83,0,0,1,keypoint,points_only,overlay,-,-,-,-,color_by_instance,-,-,-,-,,,
85
+ 84,0,0,0,keypoint,points_only,overlay,-,-,-,-,color_by_part,-,-,-,-,,,
86
+ 85,0,0,0,keypoint,points_only,separate,-,-,-,-,same_color,-,-,-,-,,,
87
+ 86,0,0,0,keypoint,points_only,separate,-,-,-,-,color_by_instance,-,-,-,-,,,
88
+ 87,0,0,0,keypoint,points_only,separate,-,-,-,-,color_by_part,-,-,-,-,,,
89
+ 88,0,0,1,keypoint,skeleton,overlay,-,-,-,-,same_color,-,-,-,-,,,
90
+ 89,0,1,1,keypoint,skeleton,overlay,-,-,-,-,color_by_instance,-,-,-,-,,,
91
+ 90,0,0,1,keypoint,skeleton,overlay,-,-,-,-,color_by_part,-,-,-,-,,,
92
+ 91,0,0,0,keypoint,skeleton,separate,-,-,-,-,same_color,-,-,-,-,,,
93
+ 92,0,0,1,keypoint,skeleton,separate,-,-,-,-,color_by_instance,-,-,-,-,,,
94
+ 93,0,0,0,keypoint,skeleton,separate,-,-,-,-,color_by_part,-,-,-,-,,,
95
+ 94,0,0,1,keypoint,-,-,-,-,-,-,-,-,-,flat_list,-,,,
96
+ 95,0,0,1,keypoint,-,-,-,-,-,-,-,-,-,part_keyed_json,-,,,
97
+ 96,0,0,1,keypoint,-,-,-,-,-,-,-,-,-,coco_style,-,,,
98
+ 97,0,0,1,depth_estimation,-,-,-,-,-,-,-,-,-,-,plasma,,,
99
+ 98,0,0,1,depth_estimation,-,-,-,-,-,-,-,-,-,-,turbo,,,
100
+ 99,0,0,1,depth_estimation,-,-,-,-,-,-,-,-,-,-,gray,,,
101
+ 100,0,1,1,low_level,-,-,-,-,-,-,-,-,-,-,-,,,
102
+ 101,0,1,1,generation,-,-,-,-,-,-,-,-,-,-,-,,,
103
+ 1742,0,0,1,instance_segmentation,-,-,-,-,-,-,-,-,-,-,-,42,17,1742
104
+ 305,0,0,1,object_detection,-,-,-,-,-,-,-,-,-,-,-,5,3,0305
105
+ 7080,0,0,1,referring_segmentation,-,-,-,-,-,-,-,-,-,-,-,80,70,7080
106
+ 4649,1,0,1,semantic_segmentation,-,-,-,-,-,-,-,-,-,-,-,46,49,4649
docs/encoding_variants_summary_vfinal3-sem-enc.csv ADDED
@@ -0,0 +1,106 @@
1
+ encoding_id,is_final,is_base,is_base_ablation,task,pixel_style,mask_embed,label_fmt,sub_sampling,opacity,label_overlay,color_scheme,instance_bbox,mask_style,text_format,colormap,combo_text_id,combo_image_id,combo_stem
2
+ 1,0,0,1,object_detection,box_only,overlay,-,-,-,-,-,-,-,-,-,-,-,-
3
+ 2,0,0,0,object_detection,box_only,separate,-,-,-,-,-,-,-,-,-,,,
4
+ 3,0,1,1,object_detection,box_label,overlay,-,-,-,-,-,-,-,-,-,,,
5
+ 4,0,0,1,object_detection,box_label,separate,-,-,-,-,-,-,-,-,-,,,
6
+ 5,0,0,1,object_detection,-,-,-,-,-,-,-,-,-,xyxy,-,,,
7
+ 6,0,0,1,object_detection,-,-,-,-,-,-,-,-,-,xywh,-,,,
8
+ 7,0,0,1,instance_segmentation,-,overlay,-,sub_sampled,-,-,-,-,-,-,-,,,
9
+ 8,0,0,1,instance_segmentation,-,separate,-,sub_sampled,-,-,-,-,-,-,-,,,
10
+ 9,0,0,1,instance_segmentation,-,text,-,sub_sampled,-,-,-,-,-,-,-,,,
11
+ 10,0,0,0,instance_segmentation,-,overlay,-,original,0.5,none,color_by_class,none,-,-,-,,,
12
+ 11,0,0,1,instance_segmentation,-,overlay,-,original,0.5,none,color_by_class,dashed,-,-,-,,,
13
+ 12,0,0,0,instance_segmentation,-,overlay,-,original,0.5,none,color_by_instance,none,-,-,-,,,
14
+ 13,0,0,0,instance_segmentation,-,overlay,-,original,0.5,none,color_by_instance,dashed,-,-,-,,,
15
+ 14,0,0,1,instance_segmentation,-,overlay,-,original,0.5,text_labels,color_by_class,none,-,-,-,,,
16
+ 15,0,1,1,instance_segmentation,-,overlay,-,original,0.5,text_labels,color_by_class,dashed,-,-,-,,,
17
+ 16,0,0,0,instance_segmentation,-,overlay,-,original,0.5,text_labels,color_by_instance,none,-,-,-,,,
18
+ 17,0,0,1,instance_segmentation,-,overlay,-,original,0.5,text_labels,color_by_instance,dashed,-,-,-,,,
19
+ 18,0,0,0,instance_segmentation,-,overlay,-,original,1,none,color_by_class,none,-,-,-,,,
20
+ 19,0,0,0,instance_segmentation,-,overlay,-,original,1,none,color_by_class,dashed,-,-,-,,,
21
+ 20,0,0,0,instance_segmentation,-,overlay,-,original,1,none,color_by_instance,none,-,-,-,,,
22
+ 21,0,0,0,instance_segmentation,-,overlay,-,original,1,none,color_by_instance,dashed,-,-,-,,,
23
+ 22,0,0,0,instance_segmentation,-,overlay,-,original,1,text_labels,color_by_class,none,-,-,-,,,
24
+ 23,0,0,1,instance_segmentation,-,overlay,-,original,1,text_labels,color_by_class,dashed,-,-,-,,,
25
+ 24,0,0,0,instance_segmentation,-,overlay,-,original,1,text_labels,color_by_instance,none,-,-,-,,,
26
+ 25,0,0,0,instance_segmentation,-,overlay,-,original,1,text_labels,color_by_instance,dashed,-,-,-,,,
27
+ 26,0,0,0,instance_segmentation,-,separate,-,original,0.5,none,color_by_class,none,-,-,-,,,
28
+ 27,0,0,0,instance_segmentation,-,separate,-,original,0.5,none,color_by_class,dashed,-,-,-,,,
29
+ 28,0,0,0,instance_segmentation,-,separate,-,original,0.5,none,color_by_instance,none,-,-,-,,,
30
+ 29,0,0,0,instance_segmentation,-,separate,-,original,0.5,none,color_by_instance,dashed,-,-,-,,,
31
+ 30,0,0,0,instance_segmentation,-,separate,-,original,0.5,text_labels,color_by_class,none,-,-,-,,,
32
+ 31,0,0,1,instance_segmentation,-,separate,-,original,0.5,text_labels,color_by_class,dashed,-,-,-,,,
33
+ 32,0,0,0,instance_segmentation,-,separate,-,original,0.5,text_labels,color_by_instance,none,-,-,-,,,
34
+ 33,0,0,0,instance_segmentation,-,separate,-,original,0.5,text_labels,color_by_instance,dashed,-,-,-,,,
35
+ 34,0,0,0,instance_segmentation,-,separate,-,original,1,none,color_by_class,none,-,-,-,,,
36
+ 35,0,0,0,instance_segmentation,-,separate,-,original,1,none,color_by_class,dashed,-,-,-,,,
37
+ 36,0,0,0,instance_segmentation,-,separate,-,original,1,none,color_by_instance,none,-,-,-,,,
38
+ 37,0,0,0,instance_segmentation,-,separate,-,original,1,none,color_by_instance,dashed,-,-,-,,,
39
+ 38,0,0,0,instance_segmentation,-,separate,-,original,1,text_labels,color_by_class,none,-,-,-,,,
40
+ 39,0,0,0,instance_segmentation,-,separate,-,original,1,text_labels,color_by_class,dashed,-,-,-,,,
41
+ 40,0,0,0,instance_segmentation,-,separate,-,original,1,text_labels,color_by_instance,none,-,-,-,,,
42
+ 41,0,0,0,instance_segmentation,-,separate,-,original,1,text_labels,color_by_instance,dashed,-,-,-,,,
43
+ 42,0,0,1,instance_segmentation,-,-,-,-,-,-,-,-,-,polygon,-,,,
44
+ 43,0,0,1,instance_segmentation,-,-,-,-,-,-,-,-,-,rle,-,,,
45
+ 44,1,0,1,semantic_segmentation,-,overlay,-,sub_sampled,-,-,-,-,-,-,-,,,
46
+ 45,1,0,1,semantic_segmentation,-,separate,-,sub_sampled,-,-,-,-,-,-,-,,,
47
+ 46,1,0,1,semantic_segmentation,-,text,-,sub_sampled,-,-,-,-,-,-,-,,,
48
+ 47,0,0,1,semantic_segmentation,-,overlay,-,original,0.5,none,standard_palette,-,-,-,-,,,
49
+ 48,0,0,0,semantic_segmentation,-,overlay,-,original,0.5,none,random,-,-,-,-,,,
50
+ 49,1,1,1,semantic_segmentation,-,overlay,-,original,0.5,text_labels,standard_palette,-,-,-,-,,,
51
+ 50,1,0,1,semantic_segmentation,-,overlay,-,original,0.5,text_labels,random,-,-,-,-,,,
52
+ 51,0,0,0,semantic_segmentation,-,overlay,-,original,1,none,standard_palette,-,-,-,-,,,
53
+ 52,0,0,0,semantic_segmentation,-,overlay,-,original,1,none,random,-,-,-,-,,,
54
+ 53,1,0,1,semantic_segmentation,-,overlay,-,original,1,text_labels,standard_palette,-,-,-,-,,,
55
+ 54,0,0,0,semantic_segmentation,-,overlay,-,original,1,text_labels,random,-,-,-,-,,,
56
+ 55,0,0,0,semantic_segmentation,-,separate,-,original,0.5,none,standard_palette,-,-,-,-,,,
57
+ 56,0,0,0,semantic_segmentation,-,separate,-,original,0.5,none,random,-,-,-,-,,,
58
+ 57,1,0,1,semantic_segmentation,-,separate,-,original,0.5,text_labels,standard_palette,-,-,-,-,,,
59
+ 58,0,0,0,semantic_segmentation,-,separate,-,original,0.5,text_labels,random,-,-,-,-,,,
60
+ 59,0,0,0,semantic_segmentation,-,separate,-,original,1,none,standard_palette,-,-,-,-,,,
61
+ 60,0,0,0,semantic_segmentation,-,separate,-,original,1,none,random,-,-,-,-,,,
62
+ 61,0,0,0,semantic_segmentation,-,separate,-,original,1,text_labels,standard_palette,-,-,-,-,,,
63
+ 62,0,0,0,semantic_segmentation,-,separate,-,original,1,text_labels,random,-,-,-,-,,,
64
+ 63,1,0,1,semantic_segmentation,-,-,-,-,-,-,-,-,-,polygon,-,,,
65
+ 64,1,0,1,semantic_segmentation,-,-,-,-,-,-,-,-,-,rle,-,,,
66
+ 65,0,0,1,referring_segmentation,-,overlay,-,sub_sampled,-,-,-,-,-,-,-,,,
67
+ 66,0,0,1,referring_segmentation,-,separate,-,sub_sampled,-,-,-,-,-,-,-,,,
68
+ 67,0,0,1,referring_segmentation,-,text,-,sub_sampled,-,-,-,-,-,-,-,,,
69
+ 68,0,1,1,referring_segmentation,-,overlay,-,original,0.5,-,-,-,fill,-,-,,,
70
+ 69,0,0,1,referring_segmentation,-,overlay,-,original,0.5,-,-,-,contour,-,-,,,
71
+ 70,0,0,1,referring_segmentation,-,overlay,-,original,0.5,-,-,-,fill_contour,-,-,,,
72
+ 71,0,0,1,referring_segmentation,-,overlay,-,original,1,-,-,-,fill,-,-,,,
73
+ 72,0,0,0,referring_segmentation,-,overlay,-,original,1,-,-,-,contour,-,-,,,
74
+ 73,0,0,0,referring_segmentation,-,overlay,-,original,1,-,-,-,fill_contour,-,-,,,
75
+ 74,0,0,1,referring_segmentation,-,separate,-,original,0.5,-,-,-,fill,-,-,,,
76
+ 75,0,0,0,referring_segmentation,-,separate,-,original,0.5,-,-,-,contour,-,-,,,
77
+ 76,0,0,0,referring_segmentation,-,separate,-,original,0.5,-,-,-,fill_contour,-,-,,,
78
+ 77,0,0,0,referring_segmentation,-,separate,-,original,1,-,-,-,fill,-,-,,,
79
+ 78,0,0,0,referring_segmentation,-,separate,-,original,1,-,-,-,contour,-,-,,,
80
+ 79,0,0,0,referring_segmentation,-,separate,-,original,1,-,-,-,fill_contour,-,-,,,
81
+ 80,0,0,1,referring_segmentation,-,-,-,-,-,-,-,-,-,polygon,-,,,
82
+ 81,0,0,1,referring_segmentation,-,-,-,-,-,-,-,-,-,rle,-,,,
83
+ 82,0,0,0,keypoint,points_only,overlay,-,-,-,-,same_color,-,-,-,-,,,
84
+ 83,0,0,1,keypoint,points_only,overlay,-,-,-,-,color_by_instance,-,-,-,-,,,
85
+ 84,0,0,0,keypoint,points_only,overlay,-,-,-,-,color_by_part,-,-,-,-,,,
86
+ 85,0,0,0,keypoint,points_only,separate,-,-,-,-,same_color,-,-,-,-,,,
87
+ 86,0,0,0,keypoint,points_only,separate,-,-,-,-,color_by_instance,-,-,-,-,,,
88
+ 87,0,0,0,keypoint,points_only,separate,-,-,-,-,color_by_part,-,-,-,-,,,
89
+ 88,0,0,1,keypoint,skeleton,overlay,-,-,-,-,same_color,-,-,-,-,,,
90
+ 89,0,1,1,keypoint,skeleton,overlay,-,-,-,-,color_by_instance,-,-,-,-,,,
91
+ 90,0,0,1,keypoint,skeleton,overlay,-,-,-,-,color_by_part,-,-,-,-,,,
92
+ 91,0,0,0,keypoint,skeleton,separate,-,-,-,-,same_color,-,-,-,-,,,
93
+ 92,0,0,1,keypoint,skeleton,separate,-,-,-,-,color_by_instance,-,-,-,-,,,
94
+ 93,0,0,0,keypoint,skeleton,separate,-,-,-,-,color_by_part,-,-,-,-,,,
95
+ 94,0,0,1,keypoint,-,-,-,-,-,-,-,-,-,flat_list,-,,,
96
+ 95,0,0,1,keypoint,-,-,-,-,-,-,-,-,-,part_keyed_json,-,,,
97
+ 96,0,0,1,keypoint,-,-,-,-,-,-,-,-,-,coco_style,-,,,
98
+ 97,0,0,1,depth_estimation,-,-,-,-,-,-,-,-,-,-,plasma,,,
99
+ 98,0,0,1,depth_estimation,-,-,-,-,-,-,-,-,-,-,turbo,,,
100
+ 99,0,0,1,depth_estimation,-,-,-,-,-,-,-,-,-,-,gray,,,
101
+ 100,0,1,1,low_level,-,-,-,-,-,-,-,-,-,-,-,,,
102
+ 101,0,1,1,generation,-,-,-,-,-,-,-,-,-,-,-,,,
103
+ 1742,0,0,1,instance_segmentation,-,-,-,-,-,-,-,-,-,-,-,42,17,1742
104
+ 305,0,0,1,object_detection,-,-,-,-,-,-,-,-,-,-,-,5,3,0305
105
+ 7080,0,0,1,referring_segmentation,-,-,-,-,-,-,-,-,-,-,-,80,70,7080
106
+ 4649,1,0,1,semantic_segmentation,-,-,-,-,-,-,-,-,-,-,-,46,49,4649
docs/my_drop.xlsx ADDED
Binary file (12.4 kB). View file
 
docs/my_drop_tabs/drop_bottom_10pct.csv ADDED
@@ -0,0 +1,15 @@
1
+ task,count_questions,count_unique_image_id,accuracy
2
+ depth_estimation,350,132,0.6238
3
+ generation_controllable,336,40,0.5642
4
+ generation_editing,302,40,0.6342
5
+ generation_inpainting_high_level,294,50,0.6495
6
+ generation_inpainting_low_level,375,80,0.5288
7
+ generation_t2i,342,342,0.6661
8
+ instance_segmentation,341,35,0.5822
9
+ keypoint,365,87,0.5512
10
+ lowlevel-deblur,258,40,0.673
11
+ lowlevel-derain,391,40,0.7534
12
+ lowlevel-desnow,386,60,0.7498
13
+ lowlevel-super-resolution,353,40,0.5536
14
+ object_detection,387,39,0.6311
15
+ referring_segmentation,389,40,0.5585
docs/my_drop_tabs/drop_bottom_15pct.csv ADDED
@@ -0,0 +1,15 @@
1
+ task,count_questions,count_unique_image_id,accuracy
2
+ depth_estimation,343,132,0.6307
3
+ generation_controllable,305,40,0.5946
4
+ generation_editing,247,40,0.7118
5
+ generation_inpainting_high_level,246,50,0.7231
6
+ generation_inpainting_low_level,375,80,0.5288
7
+ generation_t2i,342,342,0.6661
8
+ instance_segmentation,337,35,0.5859
9
+ keypoint,348,87,0.5641
10
+ lowlevel-deblur,254,40,0.6791
11
+ lowlevel-derain,369,40,0.782
12
+ lowlevel-desnow,382,60,0.7547
13
+ lowlevel-super-resolution,288,40,0.6179
14
+ object_detection,383,39,0.6348
15
+ referring_segmentation,380,40,0.5653
docs/my_drop_tabs/drop_bottom_20pct.csv ADDED
@@ -0,0 +1,15 @@
1
+ task,count_questions,count_unique_image_id,accuracy
2
+ depth_estimation,322,130,0.6501
3
+ generation_controllable,268,39,0.6307
4
+ generation_editing,244,40,0.7165
5
+ generation_inpainting_high_level,237,50,0.7395
6
+ generation_inpainting_low_level,306,79,0.5836
7
+ generation_t2i,293,293,0.7296
8
+ instance_segmentation,317,35,0.6025
9
+ keypoint,327,85,0.579
10
+ lowlevel-deblur,254,40,0.6791
11
+ lowlevel-derain,368,40,0.7832
12
+ lowlevel-desnow,382,60,0.7547
13
+ lowlevel-super-resolution,288,40,0.6179
14
+ object_detection,367,39,0.6483
15
+ referring_segmentation,355,39,0.5821
docs/my_drop_tabs/drop_bottom_25pct.csv ADDED
@@ -0,0 +1,15 @@
1
+ task,count_questions,count_unique_image_id,accuracy
2
+ depth_estimation,296,125,0.6742
3
+ generation_controllable,232,39,0.6668
4
+ generation_editing,241,40,0.7204
5
+ generation_inpainting_high_level,232,50,0.7474
6
+ generation_inpainting_low_level,306,79,0.5836
7
+ generation_t2i,292,292,0.7307
8
+ instance_segmentation,281,35,0.6298
9
+ keypoint,293,84,0.6026
10
+ lowlevel-deblur,240,40,0.6968
11
+ lowlevel-derain,353,40,0.8005
12
+ lowlevel-desnow,375,60,0.7618
13
+ lowlevel-super-resolution,250,39,0.6549
14
+ object_detection,347,39,0.6633
15
+ referring_segmentation,320,39,0.6036
docs/my_drop_tabs/drop_bottom_30pct.csv ADDED
@@ -0,0 +1,15 @@
1
+ task,count_questions,count_unique_image_id,accuracy
2
+ depth_estimation,273,120,0.6949
3
+ generation_controllable,219,39,0.681
4
+ generation_editing,205,40,0.7717
5
+ generation_inpainting_high_level,214,50,0.7742
6
+ generation_inpainting_low_level,303,79,0.5851
7
+ generation_t2i,292,292,0.7307
8
+ instance_segmentation,271,34,0.6374
9
+ keypoint,252,82,0.631
10
+ lowlevel-deblur,206,40,0.7411
11
+ lowlevel-derain,335,40,0.8205
12
+ lowlevel-desnow,350,60,0.7856
13
+ lowlevel-super-resolution,223,39,0.6823
14
+ object_detection,338,39,0.6696
15
+ referring_segmentation,306,39,0.6117
docs/my_drop_tabs/drop_bottom_35pct.csv ADDED
@@ -0,0 +1,15 @@
1
+ task,count_questions,count_unique_image_id,accuracy
2
+ depth_estimation,273,120,0.6949
3
+ generation_controllable,219,39,0.681
4
+ generation_editing,205,40,0.7717
5
+ generation_inpainting_high_level,214,50,0.7742
6
+ generation_inpainting_low_level,203,77,0.6622
7
+ generation_t2i,254,254,0.7759
8
+ instance_segmentation,229,31,0.6698
9
+ keypoint,252,82,0.631
10
+ lowlevel-deblur,206,40,0.7411
11
+ lowlevel-derain,335,40,0.8205
12
+ lowlevel-desnow,350,60,0.7856
13
+ lowlevel-super-resolution,223,39,0.6823
14
+ object_detection,300,38,0.6958
15
+ referring_segmentation,254,39,0.6419
docs/my_drop_tabs/drop_bottom_40pct.csv ADDED
@@ -0,0 +1,15 @@
1
+ task,count_questions,count_unique_image_id,accuracy
2
+ depth_estimation,210,115,0.7534
3
+ generation_controllable,219,39,0.681
4
+ generation_editing,205,40,0.7717
5
+ generation_inpainting_high_level,214,50,0.7742
6
+ generation_inpainting_low_level,203,77,0.6622
7
+ generation_t2i,254,254,0.7759
8
+ instance_segmentation,210,30,0.6852
9
+ keypoint,191,76,0.6728
10
+ lowlevel-deblur,169,40,0.7939
11
+ lowlevel-derain,318,40,0.8377
12
+ lowlevel-desnow,332,60,0.801
13
+ lowlevel-super-resolution,200,39,0.7032
14
+ object_detection,287,37,0.7047
15
+ referring_segmentation,234,39,0.6541
docs/my_drop_tabs/drop_bottom_5pct.csv ADDED
@@ -0,0 +1,15 @@
1
+ task,count_questions,count_unique_image_id,accuracy
2
+ depth_estimation,365,133,0.6069
3
+ generation_controllable,352,40,0.5468
4
+ generation_editing,325,40,0.5999
5
+ generation_inpainting_high_level,380,50,0.535
6
+ generation_inpainting_low_level,400,80,0.5046
7
+ generation_t2i,382,382,0.6115
8
+ instance_segmentation,360,38,0.5625
9
+ keypoint,389,87,0.5315
10
+ lowlevel-deblur,258,40,0.673
11
+ lowlevel-derain,391,40,0.7534
12
+ lowlevel-desnow,386,60,0.7498
13
+ lowlevel-super-resolution,353,40,0.5536
14
+ object_detection,400,39,0.6174
15
+ referring_segmentation,399,40,0.5501
docs/paper/doc.md ADDED
@@ -0,0 +1,343 @@
1
+ # Judge Dataset Documentation
2
+
3
+ ## Overview
4
+
5
+ The Judge dataset evaluates how well vision-language models (VLMs) can act as judges of computer vision model outputs. Each prompt presents a VLM with one or more encoded predictions and asks it to assess quality — via pairwise comparison, ranking, or absolute scoring. The dataset covers 15 vision tasks, 53 encoding variants, and three question types.
6
+
7
+ **Dataset location:** `Icey444/Judge_questions_v2` on Hugging Face
8
+ **v2.1 split:** 5,549 items (2,855 pairwise · 2,647 scoring · 47 ranking)
9
+
10
+ ---
11
+
12
+ ## 1. Task Coverage
13
+
14
+ | Task | Category | What is judged |
15
+ |---|---|---|
16
+ | `object_detection` | Perception | Bounding boxes (label + coordinates) for specified classes |
17
+ | `instance_segmentation` | Perception | Per-instance pixel masks for specified classes |
18
+ | `semantic_segmentation` | Perception | Per-class pixel masks across all categories |
19
+ | `referring_segmentation` | Perception | Mask of the region referred to by a natural-language expression |
20
+ | `keypoint` | Perception | 17-keypoint COCO pose skeleton per person |
21
+ | `depth_estimation` | Perception | Dense monocular depth map |
22
+ | `lowlevel-deblur` | Restoration | Image deblurring result |
23
+ | `lowlevel-derain` | Restoration | Image deraining result |
24
+ | `lowlevel-desnow` | Restoration | Image desnowing result |
25
+ | `lowlevel-super-resolution` | Restoration | Image super-resolution result |
26
+ | `generation_controllable` | Generation | Controllable image generation (control condition + prompt) |
27
+ | `generation_editing` | Generation | Instruction-based image editing |
28
+ | `generation_inpainting_high_level` | Generation | High-level inpainting (semantic fill) |
29
+ | `generation_inpainting_low_level` | Generation | Low-level inpainting (seamless texture fill) |
30
+ | `generation_t2i` | Generation | Text-to-image generation |
31
+
32
+ ---
33
+
34
+ ## 2. Encoding Variants
35
+
36
+ Encoding variants determine **how a model's prediction is presented** to the judge. Each variant transforms raw annotation output (bounding boxes, masks, keypoints, etc.) into a form the VLM can read. There are two broad families: **pixel** (rendered image) and **text** (structured string). Combo encodings pair one text and one pixel form per option.
37
+
38
+ ### 2.1 Object Detection (6 variants)
39
+
40
+ | Stem | Type | Description |
41
+ |---|---|---|
42
+ | `pixel_s0_m0` | pixel | Colored bounding boxes, no label text, overlaid on original image |
43
+ | `pixel_s1_m0` | pixel | Colored boxes + label text, overlaid on original image |
44
+ | `pixel_s1_m1` | pixel | Colored boxes + label text, rendered on separate black canvas |
45
+ | `0305` | combo | `text_xyxy` coordinates + `pixel_s1_m0` box image, per option |
46
+ | `text_xyxy` | text | `{"label":…,"bbox":[x1,y1,x2,y2]}` per detection |
47
+ | `text_xywh` | text | `{"label":…,"bbox":[x,y,w,h]}` per detection |
48
+
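+ The `text_xyxy` and `text_xywh` encodings above describe the same boxes in two serializations. A minimal conversion sketch (field names follow the schema in the table; the helper names are illustrative and not part of the encoder scripts):
+
+ ```python
+ import json
+
+ def xyxy_to_xywh(det):
+     """Convert one detection from the text_xyxy schema to text_xywh."""
+     x1, y1, x2, y2 = det["bbox"]
+     return {"label": det["label"], "bbox": [x1, y1, x2 - x1, y2 - y1]}
+
+ def xywh_to_xyxy(det):
+     """Inverse conversion, back to corner coordinates."""
+     x, y, w, h = det["bbox"]
+     return {"label": det["label"], "bbox": [x, y, x + w, y + h]}
+
+ pred = {"label": "dog", "bbox": [10.0, 20.0, 110.0, 220.0]}  # xyxy
+ print(json.dumps(xyxy_to_xywh(pred)))
+ # {"label": "dog", "bbox": [10.0, 20.0, 100.0, 200.0]}
+ ```
+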
49
+ ### 2.2 Instance Segmentation (12 variants)
50
+
51
+ **Sub-sampled pixel** (downsampled grid, each cell = most-covered instance index):
52
+
53
+ | Stem | Embed | Description |
54
+ |---|---|---|
55
+ | `pixel_ss0_m0` | overlay | Grid overlaid on original image |
56
+ | `pixel_ss0_m1` | separate | Grid on black canvas |
57
+ | `pixel_ss1_m0_o0_l0_c0_b0` | overlay | … (sub-sampled text) |
58
+
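+ A minimal sketch of the sub-sampling idea behind these variants (the grid size here is an illustrative assumption; the encoder scripts define the actual parameters):
+
+ ```python
+ import numpy as np
+
+ def subsample_instance_map(instance_map, grid=(16, 16)):
+     """Downsample an HxW instance-index map to a coarse grid.
+
+     Each output cell holds the instance index that covers the most
+     pixels of that cell, matching the description above.
+     """
+     h, w = instance_map.shape
+     gh, gw = grid
+     out = np.zeros(grid, dtype=np.int32)
+     for i in range(gh):
+         for j in range(gw):
+             cell = instance_map[i * h // gh:(i + 1) * h // gh,
+                                 j * w // gw:(j + 1) * w // gw]
+             vals, counts = np.unique(cell, return_counts=True)
+             out[i, j] = vals[np.argmax(counts)]
+     return out
+ ```
+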
59
+ **Original-resolution pixel** (full-resolution mask rendering):
60
+
61
+ | Stem | Opacity | Label overlay | Color scheme | Bbox style |
62
+ |---|---|---|---|---|
63
+ | `pixel_ss1_m0_o0_l1_c0_b1` | 0.5 | text labels | color-by-class | dashed bbox |
64
+ | `pixel_ss1_m0_o0_l1_c1_b1` | 0.5 | text labels | color-by-instance | dashed bbox |
65
+ | `pixel_ss1_m0_o1_l1_c0_b1` | 1.0 | text labels | color-by-class | dashed bbox |
66
+ | `pixel_ss1_m1_o0_l1_c0_b1` | 0.5 | text labels | color-by-class | dashed bbox, separate canvas |
67
+ | `pixel_ss1_m0_o0_l1_c0_b0` | 0.5 | text labels | color-by-class | no bbox |
68
+ | `pixel_ss1_m0_o0_l0_c0_b1` | 0.5 | no labels | color-by-class | dashed bbox |
69
+
70
+ **Text-only:**
71
+
72
+ | Stem | Format |
73
+ |---|---|
74
+ | `text_polygon` | `{"instance_id":…,"label":…,"polygon":[[x,y],…]}` per instance |
75
+ | `text_rle` | `{"instance_id":…,"label":…,"rle":{…}}` COCO RLE per instance |
76
+ | `text_matrix` | 2D integer grid (rows × cols), each cell = instance index |
77
+
78
+ **Combo:** `1742` — polygon text + color-by-instance image, per option.
79
+
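+ A sketch of how the `text_polygon` and `text_rle` formats above can be produced from a single binary instance mask (OpenCV and pycocotools are used here for illustration only; real masks may need several polygons, and the dataset's encoder scripts hold the actual logic):
+
+ ```python
+ import json
+ import numpy as np
+ import cv2
+ from pycocotools import mask as mask_utils
+
+ def encode_instance_text(mask, instance_id, label):
+     """Emit text_polygon and text_rle strings for one binary mask."""
+     m = mask.astype(np.uint8)
+
+     # Outer contour only; complex masks would need every contour.
+     contours, _ = cv2.findContours(m, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
+     polygon = contours[0].reshape(-1, 2).tolist()
+     text_polygon = json.dumps(
+         {"instance_id": instance_id, "label": label, "polygon": polygon})
+
+     # COCO run-length encoding; counts is bytes, so decode it for JSON.
+     rle = mask_utils.encode(np.asfortranarray(m))
+     rle["counts"] = rle["counts"].decode("ascii")
+     text_rle = json.dumps({"instance_id": instance_id, "label": label, "rle": rle})
+     return text_polygon, text_rle
+ ```
+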
80
+ ### 2.3 Semantic Segmentation (8 variants)
81
+
82
+ **Sub-sampled pixel** (3): overlay, separate canvas, text sub-sample.
83
+
84
+ **Original-resolution pixel** (5; opacity 0.5 unless noted otherwise):
85
+
86
+ | Stem | Label overlay | Color scheme | Canvas |
87
+ |---|---|---|---|
88
+ | `pixel_ss1_m0_o0_l1_c0` | text labels | standard palette | overlay |
89
+ | `pixel_ss1_m0_o0_l1_c1` | text labels | random colors | overlay |
90
+ | `pixel_ss1_m0_o1_l1_c0` | text labels | standard palette | overlay, full opacity |
91
+ | `pixel_ss1_m1_o0_l1_c0` | text labels | standard palette | separate canvas |
92
+ | `pixel_ss1_m0_o0_l0_c0` | no labels | standard palette | overlay |
93
+
94
+ **Text-only:**
95
+
96
+ | Stem | Format |
97
+ |---|---|
98
+ | `text_polygon` | `{"label":…,"polygon":[[x,y],…]}` per segment |
99
+ | `text_matrix` | 2D integer grid, each cell = class index |
100
+
101
+ **Combo:** `4649` — sub-sample text + original-res overlay image, per option.
102
+
103
+ ### 2.4 Referring Segmentation (11 variants)
104
+
105
+ **Sub-sampled pixel** (3): overlay, separate, text sub-sample.
106
+
107
+ **Original-resolution pixel** (5):
108
+
109
+ | Stem | Mask style | Opacity | Canvas |
110
+ |---|---|---|---|
111
+ | `pixel_ss1_m0_o0_m0` | filled region | 0.5 | overlay |
112
+ | `pixel_ss1_m0_o0_m1` | contour only | 0.5 | overlay |
113
+ | `pixel_ss1_m0_o0_m2` | fill + contour | 0.5 | overlay |
114
+ | `pixel_ss1_m0_o1_m0` | filled region | 1.0 | overlay |
115
+ | `pixel_ss1_m1_o0_m0` | filled region | 0.5 | separate canvas |
116
+
117
+ **Text-only:**
118
+
119
+ | Stem | Format |
120
+ |---|---|
121
+ | `text_polygon` | `{"label":"<expression>","polygon":[[x,y],…]}` |
122
+ | `text_matrix` | 2D grid; legend maps index → referring expression |
123
+
124
+ **Combo:** `7080` — polygon text + fill+contour image, per option.
125
+
126
+ ### 2.5 Keypoint Detection (8 variants)
127
+
128
+ **Pixel:**
129
+
130
+ | Stem | Style | Color scheme | Canvas |
131
+ |---|---|---|---|
132
+ | `pixel_s0_c1_m0` | points only | color-by-instance | overlay |
133
+ | `pixel_s1_c0_m0` | skeleton | single color (green) | overlay |
134
+ | `pixel_s1_c1_m0` | skeleton | color-by-instance | overlay |
135
+ | `pixel_s1_c2_m0` | skeleton | color-by-body-part | overlay |
136
+ | `pixel_s1_c1_m1` | skeleton | color-by-instance | separate canvas |
137
+
138
+ **Text-only:**
139
+
140
+ | Stem | Format |
141
+ |---|---|
142
+ | `text_flat_list` | 34 numbers `[x0..x16, y0..y16]` per person (COCO order) |
143
+ | `text_part_keyed_json` | `{"person_id":…,"keypoints":[{"name":…,"x":…,"y":…},…]}` |
144
+ | `text_coco_style` | 51 numbers `[x,y,v]×17` per person |
145
+
146
+ All text formats include the note: *x=0.0, y=0.0 means the keypoint was not detected or is not visible.*
147
+
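+ A sketch of how the three text formats relate, assuming the standard 17-name COCO keypoint order; the visibility flag used for `text_coco_style` is an assumption, since the dataset only specifies the `[x,y,v]` layout:
+
+ ```python
+ COCO_NAMES = [
+     "nose", "left_eye", "right_eye", "left_ear", "right_ear",
+     "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
+     "left_wrist", "right_wrist", "left_hip", "right_hip",
+     "left_knee", "right_knee", "left_ankle", "right_ankle",
+ ]
+
+ def keypoint_text_formats(person_id, xy):
+     """xy: list of 17 (x, y) pairs; (0.0, 0.0) marks a missing keypoint."""
+     xs = [p[0] for p in xy]
+     ys = [p[1] for p in xy]
+     flat_list = xs + ys                       # 34 numbers: x0..x16, y0..y16
+     coco_style = [v for (x, y) in xy          # 51 numbers: x, y, v per keypoint
+                   for v in (x, y, 0 if (x, y) == (0.0, 0.0) else 2)]
+     part_keyed = {"person_id": person_id,
+                   "keypoints": [{"name": n, "x": x, "y": y}
+                                 for n, (x, y) in zip(COCO_NAMES, xy)]}
+     return flat_list, coco_style, part_keyed
+ ```
+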
148
+ ### 2.6 Depth Estimation (3 variants)
149
+
150
+ Each variant is a colormap applied to the predicted depth map:
151
+
152
+ | Stem | Colormap | Semantics |
153
+ |---|---|---|
154
+ | `plasma` | Magma/plasma | Bright yellow = closest, dark purple = farthest |
155
+ | `turbo` | Turbo (rainbow) | Red = closest, blue = farthest |
156
+ | `gray` | Grayscale | Bright = closest, dark = farthest |
157
+
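+ A sketch of rendering a depth prediction under one of these colormaps with matplotlib; the min-max normalization and the inversion that puts the nearest pixels at the bright end are assumptions made explicit here, not the encoder scripts' exact settings:
+
+ ```python
+ import numpy as np
+ import matplotlib.pyplot as plt
+ from PIL import Image
+
+ def render_depth(depth, colormap="plasma"):
+     """Map a float depth array (larger = farther) to an RGB image."""
+     d = (depth - depth.min()) / max(float(depth.max() - depth.min()), 1e-8)
+     nearness = 1.0 - d                       # bright = closest, per the table
+     rgba = plt.get_cmap(colormap)(nearness)  # HxWx4 floats in [0, 1]
+     rgb = (rgba[..., :3] * 255).astype(np.uint8)
+     return Image.fromarray(rgb)
+ ```
+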
158
+ ### 2.7 Low-level Restoration (4 tasks × 1 variant each)
159
+
160
+ Each task has a single pixel encoding: the restored output image shown alongside the degraded input.
161
+
162
+ | Task | Input context |
163
+ |---|---|
164
+ | `lowlevel-deblur` | Blurry source image |
165
+ | `lowlevel-derain` | Rainy source image |
166
+ | `lowlevel-desnow` | Snowy source image |
167
+ | `lowlevel-super-resolution` | Low-resolution source image |
168
+
169
+ ### 2.8 Image Generation (5 tasks × 1 variant each)
170
+
171
+ Each task shows the generated output image(s) alongside the source context.
172
+
173
+ | Task | Source context shown |
174
+ |---|---|
175
+ | `generation_controllable` | Source image (control signal + reference) |
176
+ | `generation_editing` | Original image before editing |
177
+ | `generation_inpainting_high_level` | Original image with masked region |
178
+ | `generation_inpainting_low_level` | Original image with masked region |
179
+ | `generation_t2i` | No source image (text prompt only) |
180
+
181
+ ---
182
+
183
+ ## 3. Question Types
184
+
185
+ ### 3.1 Pairwise Comparison
186
+
187
+ The judge sees two options (A and B) and selects the better prediction.
188
+
189
+ **Structure:**
190
+ ```
191
+ [<image>] ← original/reference image (if available)
192
+
193
+ You are a judge to decide the quality of answers to a <task> task [based on my given image].
194
+ [Task-specific context: class(es) of interest / referring prompt / etc.]
195
+
196
+ Format of predictions: <encoding description>
197
+
198
+ Options:
199
+
200
+ A. [<image>] [text or legend]
201
+
202
+ B. [<image>] [text or legend]
203
+
204
+ <Final question>. Please answer with A or B.
205
+ ```
206
+
207
+ **Pair sampling:** Within each `(image_id, class-of-interest, error_type, prompt)` group, pairs are drawn so that no two annotations with equal `final_score` are paired. Up to 10 pairs per group per task (encoding_analysis) or 50 pairs (judge_analysis).
208
+
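+ A minimal sketch of this sampling rule (the grouping fields mirror the description above; `annotations` is assumed to be a list of dicts carrying those fields):
+
+ ```python
+ import random
+ from collections import defaultdict
+ from itertools import combinations
+
+ def sample_pairs(annotations, max_pairs_per_group=10, seed=42):
+     """Draw unequal-score pairs within each group."""
+     rng = random.Random(seed)
+     groups = defaultdict(list)
+     for ann in annotations:
+         key = (ann["image_id"], ann["class_of_interest"],
+                ann["error_type"], ann.get("prompt"))
+         groups[key].append(ann)
+
+     pairs = []
+     for anns in groups.values():
+         candidates = [(a, b) for a, b in combinations(anns, 2)
+                       if a["final_score"] != b["final_score"]]
+         rng.shuffle(candidates)
+         pairs.extend(candidates[:max_pairs_per_group])
+     return pairs
+ ```
+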
209
+ **Answer:** The letter corresponding to the annotation with the higher `final_score`.
210
+
211
+ **Final question phrasing** is sampled from five paraphrases so that results do not hinge on a single fixed wording:
212
+ - *Which prediction is better?*
213
+ - *Which option is a better execution of the vision task?*
214
+ - *Which option would you prefer as answer to the vision task?*
215
+ - *Which of the two is the better result?*
216
+ - *Which option better fulfills the task?*
217
+
218
+ ### 3.2 Ranking
219
+
220
+ The judge sees N options (A through E, or fewer) and ranks them best-to-worst.
221
+
222
+ **Structure:**
223
+ ```
224
+ [original image context]
225
+
226
+ You are a judge to decide the quality of answers to a <task> task.
227
+ [Task-specific context]
228
+
229
+ Format of predictions: <encoding description>
230
+
231
+ Options:
232
+
233
+ A. [<image> or text]
234
+ B. [<image> or text]
235
+ ...
236
+
237
+ Rank the predictions from best to worst. Respond with the ranking as a single string
238
+ of letters only (best first, worst last). For example, BCAED.
239
+ ```
240
+
241
+ Groups of 3–5 annotations sharing the same `(image_id, class-of-interest, error_type)` are ranked together. Used in `judge_analysis` only.
242
+
243
+ **Answer:** Letters ordered by descending `final_score`.
244
+
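+ A sketch of deriving the gold ranking string from a group's scores (letters are assigned in the option order shown in the prompt):
+
+ ```python
+ import string
+
+ def gold_ranking(final_scores):
+     """final_scores[i] is the score of the option labelled with the i-th letter."""
+     letters = string.ascii_uppercase[:len(final_scores)]
+     order = sorted(range(len(final_scores)),
+                    key=lambda i: final_scores[i], reverse=True)
+     return "".join(letters[i] for i in order)
+
+ print(gold_ranking([0.2, 0.9, 0.5]))  # "BCA"
+ ```
+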
245
+ ### 3.3 Scoring
246
+
247
+ The judge sees a single prediction and assigns a score from 0 to 10.
248
+
249
+ **Structure:**
250
+ ```
251
+ [original image]
252
+
253
+ You are a judge to decide the quality of answers to a <task> task [based on my given image].
254
+ [Task-specific context]
255
+
256
+ Format of prediction: <encoding description>
257
+
258
+ [First image: original. Second image: encoded prediction.] ← pixel encodings
259
+ [Prediction (text): <content>] ← text encodings
260
+
261
+ Score the quality of the prediction from 0 to 10.
262
+ 0 = random guessing / worst, 10 = best possible.
263
+ Please answer with a single score from 0 to 10 only.
264
+ ```
265
+
266
+ **Answer:** The annotation's `final_score` (normalized to 0–10).
267
+
268
+ Used in `judge_analysis` only: 20 groups × 5 annotations per group, for each stem per task.
269
+
270
+ ---
271
+
272
+ ## 4. Prompt Construction Standards
273
+
274
+ ### 4.1 Role Framing
275
+
276
+ Every prompt begins with a judge role sentence tailored to the task:
277
+
278
+ | Task group | Intro pattern |
279
+ |---|---|
280
+ | Object detection | "You are a judge to decide the quality of answers to an object detection task. **The class(es) of interest is** {coi}." |
281
+ | Instance / semantic segmentation | Same pattern with respective task name and COI |
282
+ | Referring segmentation | "… **The prompt is** '{expression}'." (or "The prompt is the referring expression shown in the options below." if not available at prompt-level) |
283
+ | Keypoint | "… **The task is pose estimation.**" |
284
+ | Depth estimation | "… **The task is depth prediction.**" |
285
+ | Low-level restoration | Task-specific sentence describing the restoration goal |
286
+ | Generation | Task-specific sentence describing the generation goal + text prompt when available |
287
+
288
+ "based on my given image" is appended when the original image is included as a `<image>` placeholder.
289
+
290
+ ### 4.2 Format Description
291
+
292
+ After the role sentence, the prompt includes a **Format of predictions** block describing the encoding so the judge knows what it is looking at:
293
+
294
+ - **Pixel encodings:** describe the visual style (overlay/canvas, color scheme, opacity, label style).
295
+ - **Text encodings:** describe the schema (e.g., JSON structure, coordinate conventions, grid dimensions and legend).
296
+ - **Combo encodings:** each option shows its own format description inline, followed by the encoded image.
297
+ - **Generation/low-level:** no format description (the prediction is a natural image); the instruction covers the task criterion instead.
298
+
299
+ ### 4.3 Color Legends
300
+
301
+ For encodings where colors carry semantic meaning, a legend is included **per option** (not once globally), because different predictions may contain different classes or instances:
302
+
303
+ - **Object detection pixel:** legend lists each detected class and its assigned color.
304
+ - **Instance segmentation (color-by-class):** legend lists each class and color.
305
+ - **Instance segmentation (color-by-instance):** no per-instance legend (colors only distinguish instances from one another and carry no class meaning).
306
+ - **Semantic segmentation:** legend lists each class and color.
307
+ - **Keypoint (color-by-part):** legend lists each of the 17 COCO keypoint names and its color.
308
+ - **Keypoint (color-by-instance):** one sentence describing that all keypoints and links of the same person share a color; no per-person list.
309
+ - **Keypoint (same color):** "All keypoints and links use a single color (green). No color legend."
310
+ - **Depth colormaps:** the colormap semantics (which end is near/far) are described in the format block.
311
+
312
+ ### 4.4 Image Placeholder Ordering
313
+
314
+ `<image>` placeholders in the prompt correspond to `media` entries in the same order:
315
+
316
+ 1. **Original/reference image** (first, when present) — always `original_{image_id}.png`.
317
+ 2. **Option A image** (prediction rendered for annotation A).
318
+ 3. **Option B image** (prediction rendered for annotation B).
319
+
320
+ Text-only encodings include only the original image (1 image total). Generation/low-level pixel encodings that have no source image in the image index include 2 images (A and B only). Generation tasks with a retrievable source image include 3 images.
321
+
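+ A small consistency check implied by this ordering — every `<image>` placeholder must have a matching media entry, with the reference image, when present, listed first (the `prompt`/`media` field names are assumptions about the item schema):
+
+ ```python
+ def check_item(item):
+     """Verify that <image> placeholders and media entries line up."""
+     n_placeholders = item["prompt"].count("<image>")
+     media = item.get("media", [])
+     if n_placeholders != len(media):
+         raise ValueError(
+             f"{n_placeholders} placeholders vs {len(media)} media entries")
+     # When a reference image is present it is listed first,
+     # followed by the option A and option B renderings.
+     return bool(media) and media[0].startswith("original_")
+ ```
+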
322
+ ### 4.5 Subset Labels
323
+
324
+ Each item carries a `subset` field indicating which run produced it:
325
+
326
+ | Subset | Stems used | Samples | Pairs | Scoring/Ranking |
327
+ |---|---|---|---|---|
328
+ | `encoding_analysis` | All `is_base_ablation=1` (51 regular + gen/lowlevel) | 10 per task | 10 per task | No |
329
+ | `judge_analysis` | `is_final=1` (53) | 20 per task | 50 per task | Yes (20 groups × 5 annotations) |
330
+
331
+ Both runs use `seed=42`. The v2.1 JSON is the deduplicated union (keyed on task + encoding + question_type + annotation IDs).
332
+
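+ A sketch of that deduplicated union (the exact key fields are an assumption based on the description above):
+
+ ```python
+ def dedup_union(encoding_items, judge_items):
+     """Merge the two runs, keeping one item per logical key."""
+     seen = {}
+     for item in encoding_items + judge_items:
+         key = (item["task"], item["encoding"], item["question_type"],
+                tuple(sorted(item["annotation_ids"])))
+         seen.setdefault(key, item)  # first occurrence wins
+     return list(seen.values())
+ ```
+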
333
+ ---
334
+
335
+ ## 5. Data Sources
336
+
337
+ | Task group | Annotation source | Image source |
338
+ |---|---|---|
339
+ | Perception tasks | `data_json_v2/*_annotations.json` | `images_v2/*.json` → local files or HF URLs |
340
+ | Low-level restoration | `data_json_v2/lowlevel_annotations.json` | HF: `Icey444/VisualJudge_images` (prediction URLs) + `images_v2/lowlevel.json` (source URLs) |
341
+ | Generation | `data_json_v2/generation_annotations.json` | HF: `Icey444/VisualJudge_images` (prediction URLs) + `images_v2/generation_images.json` (source URLs) |
342
+
343
+ Predictions are encoded locally by task-specific encoder scripts (`src/encoders/encode_*.py`) and stored under `output/encoded_v2/`. Original images are cached as `original_{image_id}.png` in the same directory.
docs/paper/figs/fig1_model_accuracy.pdf ADDED
Binary file (30.2 kB). View file
 
docs/paper/figs/fig1_model_accuracy.png ADDED

Git LFS Details

  • SHA256: 64fff265791ba26f5e6baedfe1edce49e120f5988074045a4215e3800b792b90
  • Pointer size: 131 Bytes
  • Size of remote file: 127 kB
docs/paper/figs/fig2_task_heatmap.pdf ADDED
Binary file (56.6 kB). View file
 
docs/paper/figs/fig2_task_heatmap.png ADDED

Git LFS Details

  • SHA256: 1a2c694a8dae53427b6c19590b934cc6cfa26596ba3749b6a2c0127901ce8d9d
  • Pointer size: 131 Bytes
  • Size of remote file: 165 kB
docs/paper/figs/fig3_modality.pdf ADDED
Binary file (25.5 kB). View file
 
docs/paper/figs/fig3_modality.png ADDED

Git LFS Details

  • SHA256: f60f5cb889ed8c98d7b5f03b9c8d670b61ef33d038b60eba1b14ffdeebc761ce
  • Pointer size: 130 Bytes
  • Size of remote file: 65.2 kB
docs/paper/figs/fig4_encoding_stems.pdf ADDED
Binary file (37.1 kB). View file
 
docs/paper/figs/fig4_encoding_stems.png ADDED

Git LFS Details

  • SHA256: 7367310146ee623f16f51b353042245675a218665596827d7f50d2d3052688c0
  • Pointer size: 131 Bytes
  • Size of remote file: 273 kB
docs/paper/figs/fig5_error_type.pdf ADDED
Binary file (36.5 kB). View file
 
docs/paper/figs/fig5_error_type.png ADDED

Git LFS Details

  • SHA256: 85ee1c4c16ec3c24277e583db29ccef478840ff1afffef781892054a8af7f944
  • Pointer size: 131 Bytes
  • Size of remote file: 168 kB
docs/paper/figs/fig6_instance_complexity.pdf ADDED
Binary file (37.7 kB). View file
 
docs/paper/figs/fig6_instance_complexity.png ADDED

Git LFS Details

  • SHA256: d8ce9ad48833504b4076856bb6de382331ec47ba4a472c3d2531a053776cf993
  • Pointer size: 131 Bytes
  • Size of remote file: 121 kB
docs/paper/figs/fig_encoding_top3.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:59a43c6cccea2ebcbfd401ea219033eab483d82809d0691b438593bc13016830
3
+ size 1759309