ch-min committed · Commit 19898f1 · verified · 1 Parent(s): 3404d44

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. correct_filter/correct_filter_analysis.py +1583 -0
  2. correct_filter/norm_analysis.py +454 -0
  3. correct_filter/run_molmo.sh +59 -0
  4. correct_filter/run_nvila.sh +70 -0
  5. correct_filter/run_qwen.sh +59 -0
  6. exp2a_correct_filter/exp2a_correct_filter_analysis.py +1825 -0
  7. exp2a_correct_filter/run_molmo.sh +62 -0
  8. exp2a_correct_filter/run_nvila.sh +63 -0
  9. exp2a_correct_filter/run_qwen.sh +62 -0
  10. exp2a_modified/exp2a_modified_embedding_analysis.py +1228 -0
  11. exp2a_modified/results/molmo/results_summary.csv +26 -0
  12. exp2a_modified/results/molmo/similarity_2m_L19_middle.csv +7 -0
  13. exp2a_modified/results/molmo/similarity_2m_L26_late_mid.csv +7 -0
  14. exp2a_modified/results/molmo/similarity_2m_L31_late.csv +7 -0
  15. exp2a_modified/results/molmo/similarity_2m_L6_early.csv +7 -0
  16. exp2a_modified/results/molmo/similarity_400k_L13_early_mid.csv +7 -0
  17. exp2a_modified/results/molmo/similarity_400k_L19_middle.csv +7 -0
  18. exp2a_modified/results/molmo/similarity_400k_L26_late_mid.csv +7 -0
  19. exp2a_modified/results/molmo/similarity_400k_L31_late.csv +7 -0
  20. exp2a_modified/results/molmo/similarity_400k_L6_early.csv +7 -0
  21. exp2a_modified/results/molmo/similarity_800k_L13_early_mid.csv +7 -0
  22. exp2a_modified/results/molmo/similarity_800k_L26_late_mid.csv +7 -0
  23. exp2a_modified/results/molmo/similarity_800k_L31_late.csv +7 -0
  24. exp2a_modified/results/molmo/similarity_800k_L6_early.csv +7 -0
  25. exp2a_modified/results/molmo/similarity_80k_L13_early_mid.csv +7 -0
  26. exp2a_modified/results/molmo/similarity_80k_L19_middle.csv +7 -0
  27. exp2a_modified/results/molmo/similarity_80k_L26_late_mid.csv +7 -0
  28. exp2a_modified/results/molmo/similarity_80k_L31_late.csv +7 -0
  29. exp2a_modified/results/molmo/similarity_80k_L6_early.csv +7 -0
  30. exp2a_modified/results/molmo/similarity_vanilla_L13_early_mid.csv +7 -0
  31. exp2a_modified/results/molmo/similarity_vanilla_L19_middle.csv +7 -0
  32. exp2a_modified/results/molmo/similarity_vanilla_L31_late.csv +7 -0
  33. exp2a_modified/results/molmo/similarity_vanilla_L6_early.csv +7 -0
  34. exp2a_modified/results/nvila/similarity_2m_L11_early_mid.csv +7 -0
  35. exp2a_modified/results/nvila/similarity_2m_L6_early.csv +7 -0
  36. exp2a_modified/results/nvila/similarity_400k_L22_late_mid.csv +7 -0
  37. exp2a_modified/results/nvila/similarity_800k_L27_late.csv +7 -0
  38. exp2a_modified/results/nvila/similarity_80k_L11_early_mid.csv +7 -0
  39. exp2a_modified/results/nvila/similarity_80k_L17_middle.csv +7 -0
  40. exp2a_modified/results/nvila/similarity_80k_L22_late_mid.csv +7 -0
  41. exp2a_modified/results/nvila/similarity_80k_L27_late.csv +7 -0
  42. exp2a_modified/results/nvila/similarity_vanilla_L22_late_mid.csv +7 -0
  43. exp2a_modified/results/nvila/similarity_vanilla_L6_early.csv +7 -0
  44. exp2a_modified/results/qwen/results_summary.csv +26 -0
  45. exp2a_modified/results/qwen/similarity_2m_L14_early_mid.csv +7 -0
  46. exp2a_modified/results/qwen/similarity_2m_L22_middle.csv +7 -0
  47. exp2a_modified/results/qwen/similarity_2m_L29_late_mid.csv +7 -0
  48. exp2a_modified/results/qwen/similarity_2m_L35_late.csv +7 -0
  49. exp2a_modified/results/qwen/similarity_2m_L7_early.csv +7 -0
  50. exp2a_modified/results/qwen/similarity_400k_L14_early_mid.csv +7 -0
correct_filter/correct_filter_analysis.py ADDED
@@ -0,0 +1,1583 @@
#!/usr/bin/env python3
"""
Correct Filter Analysis: Correctness-Filtered Representation Analysis

Extends the original experiment by:
- Generating model predictions to determine correctness
- Filtering samples into correct/incorrect groups with balanced sampling
- Running similarity analysis on each group separately
- Recording per-scale, per-category accuracy
- Comparing correct-only vs incorrect-only vs all to check whether
  scaling effects on similarity are genuine or just accuracy-driven

Fixes applied:
- Fix 1: "Answer with only one word." appended to all prompts
- Fix 2: Synonym handling (below/beneath->under, near/nearby->close, distant->far)
- Fix 3: Overlay trajectory plots (correct+all, correct+incorrect, all three)
  plus cross-scale versions for correct-only and all-samples
"""

import os
import sys
import json
import argparse
import base64
import logging
import random
import re
from io import BytesIO
from collections import defaultdict
from typing import Dict, List, Tuple, Optional, Any
from abc import ABC, abstractmethod

import torch
import numpy as np
import pandas as pd
from PIL import Image
from tqdm import tqdm
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.metrics.pairwise import cosine_similarity

logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)

# ============================================================================
# Constants
# ============================================================================

CATEGORY_ORDER = ['left', 'right', 'above', 'under', 'far', 'close']

OPPOSITE_MAP = {
    'left': 'right', 'right': 'left',
    'above': 'under', 'under': 'above',
    'far': 'close', 'close': 'far',
}

# Fix 2: Synonyms for answer matching
SYNONYMS = {
    'under': ['below', 'beneath'],
    'close': ['near', 'nearby'],
    'far': ['distant'],
}

TRAJECTORY_PAIRS = {
    'hypothesis': [
        ('above', 'far', 'above-far', '#d62728'),
        ('under', 'close', 'under-close', '#1f77b4'),
    ],
    'within_axis': [
        ('left', 'right', 'left-right', '#2ca02c'),
        ('above', 'under', 'above-under', '#ff7f0e'),
        ('far', 'close', 'far-close', '#9467bd'),
    ],
    'counter_hypothesis': [
        ('above', 'close', 'above-close', '#e377c2'),
        ('under', 'far', 'under-far', '#17becf'),
    ],
}

# Key pairs for overlay trajectory plots (Fix 3)
KEY_PAIRS = [
    ('above', 'far', 'above-far'),
    ('under', 'close', 'under-close'),
    ('left', 'right', 'left-right'),
    ('above', 'under', 'above-under'),
    ('far', 'close', 'far-close'),
]

SCALE_COLORS = {
    'vanilla': '#1f77b4', '80k': '#ff7f0e', '400k': '#2ca02c',
    '800k': '#d62728', '2m': '#9467bd', 'roborefer': '#8c564b',
}

MODEL_CONFIGS = {
    'molmo': {
        'vanilla': 'allenai/Molmo-7B-O-0924',
        '80k': '/data/shared/Qwen/molmo/outputs/data_scale_exp_80k/unshared',
        '400k': '/data/shared/Qwen/molmo/outputs/data_scale_exp_400k/unshared',
        '800k': '/data/shared/Qwen/molmo/outputs/data_scale_exp_800k/unshared',
        '2m': '/data/shared/Qwen/molmo/outputs/data_scale_exp_2m/unshared',
    },
    'nvila': {
        'vanilla': '/data/shared/Qwen/mydisk/NVILA-Lite-2B',
        # '80k': '/data/shared/Qwen/mydisk/output/DATA/NVILA-Lite-2B-DATA_SCALE_EXP_80K-20251108_180221',
        # '400k': '/data/shared/Qwen/mydisk/output/DATA/NVILA-Lite-2B-DATA_SCALE_EXP_400K-20251108_180221',
        # '800k': '/data/shared/Qwen/mydisk/output/DATA/NVILA-Lite-2B-DATA_SCALE_EXP_800K-20251108_180221',
        # '2m': '/data/shared/Qwen/mydisk/output/DATA/NVILA-Lite-2B-DATA_SCALE_EXP_2M-20260205_003632',
        '80k': '/data/shared/Qwen/mydisk/output/SINGLE/NVILA-Lite-2B-SINGLE_REFSPATIAL_16M-20260217_035008/checkpoint-1250',
        '400k': '/data/shared/Qwen/mydisk/output/SINGLE/NVILA-Lite-2B-SINGLE_REFSPATIAL_16M-20260217_035008/checkpoint-6250',
        '800k': '/data/shared/Qwen/mydisk/output/SINGLE/NVILA-Lite-2B-SINGLE_REFSPATIAL_16M-20260217_035008/checkpoint-12500',
        '2m': '/data/shared/Qwen/mydisk/output/SINGLE/NVILA-Lite-2B-SINGLE_REFSPATIAL_16M-20260217_035008/checkpoint-31250',
        'roborefer': '/data/shared/Qwen/mydisk/RoboRefer_model',
    },
    'qwen': {
        'vanilla': 'Qwen/Qwen2.5-VL-3B-Instruct',
        '80k': '/data/shared/Qwen/mydisk/output/Qwen/Qwen2.5-VL-3B-Instruct-data_scale_exp_80k-20251114_120221',
        '400k': '/data/shared/Qwen/mydisk/output/Qwen/Qwen2.5-VL-3B-Instruct-data_scale_exp_400k-20251114_120221',
        '800k': '/data/shared/Qwen/mydisk/output/Qwen/Qwen2.5-VL-3B-Instruct-data_scale_exp_800k-20251114_120221',
        '2m': '/data/shared/Qwen/mydisk/output/Qwen/Qwen2.5-VL-3B-Instruct-data_scale_exp_2m-20260109_120517',
    },
}


# ============================================================================
# Data Loading & Modification
# ============================================================================

OBJECT_PATTERNS = [
    re.compile(r'between\s+(.+?)\s+and\s+(.+?)\s+in', re.IGNORECASE),
    re.compile(r'of\s+(.+?)\s+and\s+(.+?)\s+in', re.IGNORECASE),
    re.compile(r'positions\s+of\s+(.+?)\s+and\s+(.+?)\s+interact', re.IGNORECASE),
    re.compile(r'How\s+are\s+(.+?)\s+and\s+(.+?)\s+positioned', re.IGNORECASE),
    re.compile(r'arrangement\s+of\s+(.+?)\s+and\s+(.+?)\s+in', re.IGNORECASE),
]


def extract_objects(question: str) -> Tuple[str, str]:
    for pattern in OBJECT_PATTERNS:
        m = pattern.search(question)
        if m:
            return m.group(1).strip(), m.group(2).strip()
    raise ValueError(f"Could not extract objects from: {question}")


def modify_pairwise_sample(sample: dict) -> dict:
    obj1, obj2 = extract_objects(sample['question'])
    category = sample['category']

    # Fix 1: Add "Answer with only one word."
    if category in ['left', 'right']:
        new_question = f"Is the {obj1} to the left or right of the {obj2}? Answer with only one word."
    else:  # above, under
        new_question = f"Is the {obj1} above or under the {obj2}? Answer with only one word."

    return {
        'index': sample['index'],
        'image_base64': sample['image_base64'],
        'question': new_question,
        'answer': category,
        'category': category,
    }


def modify_distance_sample(sample: dict, rng: random.Random) -> dict:
    category = sample['category']
    answer_key = sample['answer']
    options = sample['options']

    target_object = options[answer_key]
    candidates = [v for k, v in options.items() if k != answer_key]
    reference_object = rng.choice(candidates)

    # Fix 1: Add "Answer with only one word."
    new_question = f"Compared to {reference_object}, is {target_object} far or close from you? Answer with only one word."

    return {
        'index': sample['index'],
        'image_base64': sample['image_base64'],
        'question': new_question,
        'answer': category,
        'category': category,
    }


def load_and_modify_data(tsv_path: str, seed: int = 42) -> Dict[str, List[dict]]:
    """Load ALL samples (no per-category limit) to maximize data for correct/incorrect filtering."""
    rng = random.Random(seed)
    np.random.seed(seed)

    df = pd.read_csv(tsv_path, sep='\t')

    raw_grouped = defaultdict(list)
    for _, row in df.iterrows():
        category = row['category']
        sample = {
            'index': row['index'],
            'image_base64': row['image'],
            'question': row['question'],
            'answer': row['answer'],
            'category': category,
            'options': {'A': row['A'], 'B': row['B'], 'C': row['C'], 'D': row['D']}
        }
        raw_grouped[category].append(sample)

    modified_data = defaultdict(list)
    stats = {'total': 0, 'success': 0, 'failed': 0}

    for category in CATEGORY_ORDER:
        samples = raw_grouped[category]
        for sample in samples:
            stats['total'] += 1
            try:
                if category in ['left', 'right', 'above', 'under']:
                    modified = modify_pairwise_sample(sample)
                else:
                    modified = modify_distance_sample(sample, rng)
                assert modified['answer'] == modified['category']
                modified_data[category].append(modified)
                stats['success'] += 1
            except Exception as e:
                stats['failed'] += 1
                logger.warning(f"  Failed to modify sample {sample['index']}: {e}")

    logger.info(f"Data modification: {stats['success']}/{stats['total']} success, {stats['failed']} failed")
    for cat in CATEGORY_ORDER:
        if cat in modified_data:
            logger.info(f"  {cat}: {len(modified_data[cat])} samples")
            ex = modified_data[cat][0]
            logger.info(f"    Example Q: {ex['question']}")
            logger.info(f"    Example A: {ex['answer']}")

    return dict(modified_data)


def decode_base64_image(base64_str: str) -> Image.Image:
    image_data = base64.b64decode(base64_str)
    return Image.open(BytesIO(image_data)).convert('RGB')

# ============================================================================
# Answer Matching (Fix 2: synonym support)
# ============================================================================

def find_earliest_position(text: str, word: str) -> int:
    """Find earliest position of word or any of its synonyms in text."""
    positions = []
    pos = text.find(word)
    if pos != -1:
        positions.append(pos)
    for syn in SYNONYMS.get(word, []):
        pos = text.find(syn)
        if pos != -1:
            positions.append(pos)
    return min(positions) if positions else -1


def check_answer(generated_text: str, expected_category: str) -> bool:
    """Check if model's generated text matches the expected category.

    Uses synonym-aware matching: finds which of the two options
    (expected vs opposite, including synonyms) appears first.
    """
    if not generated_text or not generated_text.strip():
        return False

    text = generated_text.strip().lower()
    expected = expected_category.lower()
    opposite = OPPOSITE_MAP[expected]

    pos_exp = find_earliest_position(text, expected)
    pos_opp = find_earliest_position(text, opposite)

    if pos_exp == -1:
        return False
    if pos_opp == -1:
        return True
    return pos_exp < pos_opp


# ============================================================================
# Base Extractor (prefill-only hooks + extract_and_predict)
# ============================================================================

class BaseHiddenStateExtractor(ABC):
    def __init__(self, model_path: str, device: str = 'cuda', target_layers: List[int] = None):
        self.model_path = model_path
        self.device = device
        self.hidden_states = {}
        self.hooks = []
        self._load_model()
        num_layers = self._get_num_layers()
        if target_layers is None:
            self.target_layers = list(range(num_layers))
            logger.info(f"Model has {num_layers} layers. Extracting ALL layers (0..{num_layers-1})")
        else:
            self.target_layers = target_layers
            logger.info(f"Model has {num_layers} layers. Target layers: {self.target_layers}")
        self._register_hooks()

    def _register_hooks(self):
        for layer_idx in self.target_layers:
            module = self._get_layer_module(layer_idx)
            if module is not None:
                hook = module.register_forward_hook(self._make_hook(layer_idx))
                self.hooks.append(hook)

    def _make_hook(self, layer_idx: int):
        def hook_fn(module, input, output):
            if isinstance(output, tuple):
                hidden = output[0]
            else:
                hidden = output
            if hidden.shape[1] > 1:  # prefill only
                last_token = hidden[:, -1, :].detach().cpu().float()
                self.hidden_states[layer_idx] = last_token.squeeze(0)
        return hook_fn

    @abstractmethod
    def _load_model(self): pass

    @abstractmethod
    def _get_num_layers(self) -> int: pass

    @abstractmethod
    def _get_layer_module(self, layer_idx: int): pass

    @abstractmethod
    def extract_and_predict(self, image: Image.Image, question: str) -> Tuple[Dict[int, torch.Tensor], str]: pass

    def cleanup(self):
        for hook in self.hooks:
            hook.remove()
        self.hooks = []
        if hasattr(self, 'model'):
            del self.model
        if hasattr(self, 'processor'):
            del self.processor
        torch.cuda.empty_cache()


# ============================================================================
# Molmo Extractor
# ============================================================================

class MolmoExtractor(BaseHiddenStateExtractor):
    def _load_model(self):
        config_path = os.path.join(self.model_path, "config.yaml")
        checkpoint_path = os.path.join(self.model_path, "model.pt")
        if os.path.exists(config_path) and os.path.exists(checkpoint_path):
            self._load_native_model()
            self.is_native = True
        else:
            self._load_hf_model()
            self.is_native = False

    def _load_native_model(self):
        from olmo.config import ModelConfig
        from olmo.model import Molmo as NativeMolmoModel
        from olmo.data.model_preprocessor import MultiModalPreprocessor
        from olmo.data.data_formatter import DataFormatter

        _original_load = torch.load
        def _unsafe_load_wrapper(*args, **kwargs):
            if 'weights_only' not in kwargs:
                kwargs['weights_only'] = False
            return _original_load(*args, **kwargs)
        torch.load = _unsafe_load_wrapper

        cfg = ModelConfig.load(
            os.path.join(self.model_path, "config.yaml"),
            key="model", validate_paths=False
        )
        cfg.init_device = "cpu"
        self.model = NativeMolmoModel(cfg)
        state_dict = torch.load(os.path.join(self.model_path, "model.pt"), map_location="cpu")
        self.model.load_state_dict(state_dict)
        self.model = self.model.to(self.device, dtype=torch.bfloat16).eval()
        self.tokenizer = cfg.get_tokenizer()

        v_cfg = cfg.vision_backbone
        h, w = cfg.llm_patches_per_crop()
        image_padding_mask = 2 if cfg.fix_image_padding else (1 if cfg.image_padding_embed else None)

        class SafeDataFormatter(DataFormatter):
            def get_system_prompt(self, style, for_inference, messages, rng=None):
                if style is None:
                    style = "User"
                return super().get_system_prompt(style, for_inference, messages, rng)

        self.formatter = SafeDataFormatter(
            prompt_templates=cfg.prompt_type, message_format=cfg.message_formatting,
            system_prompt=cfg.system_prompt_kind, always_start_with_space=cfg.always_start_with_space,
            default_inference_len=cfg.default_inference_len
        )
        self.preprocessor = MultiModalPreprocessor(
            tokenizer=self.tokenizer, normalize=str(v_cfg.image_model_type),
            crop_mode=cfg.crop_mode, max_crops=cfg.max_crops,
            overlap_margins=cfg.overlap_margins, resize=v_cfg.resize_mode,
            use_col_tokens=cfg.use_col_tokens, base_image_input_size=v_cfg.image_default_input_size,
            image_pooling_w=cfg.image_pooling_w, image_pooling_h=cfg.image_pooling_h,
            image_token_length_w=w, image_token_length_h=h,
            image_patch_size=v_cfg.image_patch_size, image_padding_mask=image_padding_mask,
            pad_value=cfg.pad_value, loss_token_weighting=cfg.multi_annotation_weighting,
        )
        logger.info(f"Loaded native Molmo from {self.model_path}")

    def _load_hf_model(self):
        from transformers import AutoModelForCausalLM, AutoProcessor
        self.model = AutoModelForCausalLM.from_pretrained(
            self.model_path, torch_dtype=torch.bfloat16,
            trust_remote_code=True, device_map=self.device
        ).eval()
        self.processor = AutoProcessor.from_pretrained(self.model_path, trust_remote_code=True)
        logger.info(f"Loaded HF Molmo from {self.model_path}")

    def _get_num_layers(self) -> int:
        if self.is_native:
            return len(self.model.transformer.blocks)
        if hasattr(self.model, 'model') and hasattr(self.model.model, 'transformer'):
            return len(self.model.model.transformer.blocks)
        return 32

    def _get_layer_module(self, layer_idx: int):
        if self.is_native:
            return self.model.transformer.blocks[layer_idx]
        return self.model.model.transformer.blocks[layer_idx]

    def extract_and_predict(self, image, question):
        self.hidden_states = {}
        if self.is_native:
            example = {"messages": [question], "image": image}
            messages, _ = self.formatter(example, is_training=False, for_inference=True, rng=np.random)
            batch = self.preprocessor(np.array(image), messages, is_training=False, require_image_features=True)
            if 'input_ids' not in batch and 'input_tokens' in batch:
                batch['input_ids'] = batch['input_tokens']

            def to_t(x):
                return torch.from_numpy(x) if isinstance(x, np.ndarray) else x

            input_ids = to_t(batch['input_ids']).unsqueeze(0).to(self.device).long()
            images_t = to_t(batch['images']).unsqueeze(0).to(self.device, dtype=torch.bfloat16)
            image_masks = to_t(batch['image_masks']).unsqueeze(0).to(self.device, dtype=torch.bfloat16)
            image_input_idx = to_t(batch['image_input_idx']).unsqueeze(0).to(self.device)

            with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
                gen = self.model.generate(
                    input_ids=input_ids, images=images_t,
                    image_masks=image_masks, image_input_idx=image_input_idx,
                    max_steps=20, beam_size=1,
                )
            generated_ids = gen.token_ids[0, 0]
            answer = self.tokenizer.decode(generated_ids.tolist()).strip()
            for eos in ['<|endoftext|>', '</s>', '<|end|>']:
                answer = answer.replace(eos, '').strip()
        else:
            from transformers import GenerationConfig
            inputs = self.processor.process(images=[image], text=question)
            processed = {}
            for k, v in inputs.items():
                v = v.to(self.device).unsqueeze(0)
                if v.dtype == torch.float32:
                    v = v.to(dtype=torch.bfloat16)
                processed[k] = v
            with torch.no_grad(), torch.autocast("cuda", dtype=torch.bfloat16):
                output = self.model.generate_from_batch(
                    processed,
                    GenerationConfig(max_new_tokens=20, stop_strings="<|endoftext|>"),
                    tokenizer=self.processor.tokenizer,
                )
            input_len = processed['input_ids'].shape[1]
            answer = self.processor.tokenizer.decode(output[0, input_len:], skip_special_tokens=True).strip()

        return self.hidden_states.copy(), answer


# ============================================================================
# NVILA Extractor
# ============================================================================

class NVILAExtractor(BaseHiddenStateExtractor):
    def _load_model(self):
        original_sys_path = sys.path.copy()
        sys.path = [p for p in sys.path if 'RoboRefer' not in p]
        modules_to_remove = [k for k in list(sys.modules.keys()) if 'llava' in k.lower()]
        removed = {m: sys.modules.pop(m) for m in modules_to_remove}
        try:
            import llava
            from llava.media import Image as LLaVAImage
            from llava import conversation as clib
        except Exception as err:
            sys.path = original_sys_path
            for m, mod in removed.items():
                sys.modules[m] = mod
            raise RuntimeError(f"Failed to import llava: {err}")
        sys.path = original_sys_path
        self.LLaVAImage = LLaVAImage
        self.clib = clib
        self.model = llava.load(self.model_path, model_base=None)
        self._find_llm_backbone()
        logger.info(f"Loaded NVILA from {self.model_path}")

    def _find_llm_backbone(self):
        candidates = []
        if hasattr(self.model, 'llm'):
            if hasattr(self.model.llm, 'model') and hasattr(self.model.llm.model, 'layers'):
                candidates.append(self.model.llm.model.layers)
            if hasattr(self.model.llm, 'layers'):
                candidates.append(self.model.llm.layers)
        if hasattr(self.model, 'model'):
            if hasattr(self.model.model, 'model') and hasattr(self.model.model.model, 'layers'):
                candidates.append(self.model.model.model.layers)
            if hasattr(self.model.model, 'layers'):
                candidates.append(self.model.model.layers)
        for name, module in self.model.named_modules():
            if name.endswith('.layers') and hasattr(module, '__len__') and len(module) > 0:
                candidates.append(module)
        if candidates:
            self.llm_backbone = candidates[0]
        else:
            raise ValueError("Could not locate transformer layers in NVILA model")

    def _get_num_layers(self) -> int:
        return len(self.llm_backbone) if hasattr(self, 'llm_backbone') else 24

    def _get_layer_module(self, layer_idx: int):
        return self.llm_backbone[layer_idx]

    def extract_and_predict(self, image, question):
        self.hidden_states = {}
        import tempfile
        with tempfile.NamedTemporaryFile(suffix='.png', delete=False) as f:
            temp_path = f.name
        image.save(temp_path)
        try:
            prompt = [self.LLaVAImage(temp_path), question]
            from transformers import GenerationConfig
            response = self.model.generate_content(
                prompt, generation_config=GenerationConfig(max_new_tokens=20, do_sample=False)
            )
        finally:
            os.unlink(temp_path)
        answer = str(response[0] if isinstance(response, list) else response).strip()
        return self.hidden_states.copy(), answer


class RoboReferExtractor(NVILAExtractor):
    ROBOREFER_PATH = '/data/shared/Qwen/RoboRefer'

    def _load_model(self):
        original_sys_path = sys.path.copy()
        if self.ROBOREFER_PATH not in sys.path:
            sys.path.insert(0, self.ROBOREFER_PATH)
        modules_to_remove = [k for k in list(sys.modules.keys()) if 'llava' in k.lower()]
        removed = {m: sys.modules.pop(m) for m in modules_to_remove}
        try:
            import llava
            from llava.media import Image as LLaVAImage
            from llava import conversation as clib
        except Exception as err:
            sys.path = original_sys_path
            for m, mod in removed.items():
                sys.modules[m] = mod
            raise RuntimeError(f"Failed to import RoboRefer llava: {err}")
        sys.path = original_sys_path
        self.LLaVAImage = LLaVAImage
        self.clib = clib
        self.model = llava.load(self.model_path, model_base=None)
        self._find_llm_backbone()
        logger.info(f"Loaded RoboRefer from {self.model_path}")


# ============================================================================
# Qwen2.5-VL Extractor
# ============================================================================

class Qwen25VLExtractor(BaseHiddenStateExtractor):
    BASE_MODEL = "Qwen/Qwen2.5-VL-3B-Instruct"

    def _load_model(self):
        from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
        try:
            self.model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
                self.model_path, torch_dtype=torch.bfloat16, device_map=self.device
            )
        except ImportError:
            self.model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
                self.model_path, torch_dtype=torch.bfloat16
            ).to(self.device)
        self.model.eval()
        if self.model_path.startswith('/'):
            self.processor = AutoProcessor.from_pretrained(self.BASE_MODEL)
        else:
            self.processor = AutoProcessor.from_pretrained(self.model_path)
        logger.info(f"Loaded Qwen2.5-VL from {self.model_path}")

    def _get_num_layers(self) -> int:
        return len(self.model.model.layers)

    def _get_layer_module(self, layer_idx: int):
        return self.model.model.layers[layer_idx]

    def extract_and_predict(self, image, question):
        self.hidden_states = {}
        messages = [{"role": "user", "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": question}
        ]}]
        text = self.processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
        from qwen_vl_utils import process_vision_info
        image_inputs, video_inputs = process_vision_info(messages)
        inputs = self.processor(
            text=[text], images=image_inputs, videos=video_inputs,
            padding=True, return_tensors="pt"
        ).to(self.device)
        with torch.no_grad():
            output_ids = self.model.generate(**inputs, max_new_tokens=20, do_sample=False)
        input_len = inputs['input_ids'].shape[1]
        answer = self.processor.tokenizer.decode(output_ids[0, input_len:], skip_special_tokens=True).strip()
        return self.hidden_states.copy(), answer


def get_extractor(model_type: str, model_path: str, scale: str = None, **kwargs):
    if model_type == 'nvila' and scale == 'roborefer':
        return RoboReferExtractor(model_path, **kwargs)
    extractors = {'molmo': MolmoExtractor, 'nvila': NVILAExtractor, 'qwen': Qwen25VLExtractor}
    return extractors[model_type](model_path, **kwargs)

627
+
628
+ # ============================================================================
+ # Extraction with Per-Sample Recording
+ # ============================================================================
+
+ def extract_all_with_predictions(
+     extractor: BaseHiddenStateExtractor,
+     data: Dict[str, List[dict]],
+ ) -> Dict[str, List[dict]]:
+     """Extract hidden states and predictions for all samples."""
+     sample_records = defaultdict(list)
+
+     for category in CATEGORY_ORDER:
+         if category not in data:
+             continue
+         samples = data[category]
+         logger.info(f"Processing category: {category} ({len(samples)} samples)")
+         success_count = 0
+
+         for sample in tqdm(samples, desc=f" {category}"):
+             try:
+                 image = decode_base64_image(sample['image_base64'])
+                 hidden_states, predicted = extractor.extract_and_predict(image, sample['question'])
+
+                 is_correct = check_answer(predicted, category)
+                 mark = "O" if is_correct else "X"
+                 tqdm.write(f" [{mark}] #{sample['index']:<6} expected={category:<8} | predicted=\"{predicted[:80]}\"")
+
+                 record = {
+                     'hidden_states': {},
+                     'is_correct': is_correct,
+                     'predicted': predicted,
+                     'index': sample['index'],
+                 }
+
+                 for layer_idx in extractor.target_layers:
+                     if layer_idx in hidden_states:
+                         state = hidden_states[layer_idx].numpy().flatten()
+                         if state.size > 0:
+                             record['hidden_states'][layer_idx] = state
+
+                 if record['hidden_states']:
+                     sample_records[category].append(record)
+                     success_count += 1
+                 else:
+                     logger.warning(f" No hidden states for sample {sample['index']}")
+             except Exception as e:
+                 logger.warning(f" Error processing sample {sample['index']}: {e}")
+                 continue
+
+         correct_n = sum(1 for r in sample_records[category] if r['is_correct'])
+         incorrect_n = sum(1 for r in sample_records[category] if not r['is_correct'])
+         acc = correct_n / (correct_n + incorrect_n) * 100 if (correct_n + incorrect_n) > 0 else 0
+         logger.info(f" {category}: {success_count}/{len(samples)} extracted | "
+                     f"correct={correct_n}, incorrect={incorrect_n}, accuracy={acc:.1f}%")
+
+     total_correct = sum(1 for cat in sample_records for r in sample_records[cat] if r['is_correct'])
+     total_all = sum(len(sample_records[cat]) for cat in sample_records)
+     overall_acc = total_correct / total_all * 100 if total_all > 0 else 0
+     logger.info("\n === Category Accuracy Summary ===")
+     for cat in CATEGORY_ORDER:
+         if cat in sample_records:
+             c = sum(1 for r in sample_records[cat] if r['is_correct'])
+             n = len(sample_records[cat])
+             a = c / n * 100 if n > 0 else 0
+             logger.info(f" {cat:>6s}: {c:>4d}/{n:<4d} = {a:5.1f}%")
+     logger.info(f" {'TOTAL':>6s}: {total_correct:>4d}/{total_all:<4d} = {overall_acc:5.1f}%")
+     logger.info(" ================================\n")
+
+     return dict(sample_records)
+
+
+ # ============================================================================
+ # Balanced Sampling
+ # ============================================================================
+
+ def compute_balanced_size(sample_records: Dict[str, List[dict]], filter_correct: bool) -> int:
+     counts = []
+     for cat in CATEGORY_ORDER:
+         if cat not in sample_records:
+             return 0
+         n = sum(1 for s in sample_records[cat] if s['is_correct'] == filter_correct)
+         counts.append(n)
+
+     min_count = min(counts)
+     if min_count == 0:
+         return 0
+
+     # Round down to the nearest multiple of 50; keep min_count if that would be zero
+     balanced = (min_count // 50) * 50
+     if balanced == 0:
+         balanced = min_count
+     return balanced
+
+
+ def balanced_sample_and_average(
+     sample_records: Dict[str, List[dict]],
+     filter_correct: bool,
+     n_samples: int,
+     target_layers: List[int],
+     seed: int = 42,
+ ) -> Dict[int, Dict[str, np.ndarray]]:
+     rng = random.Random(seed)
+     result = defaultdict(dict)
+
+     for category in CATEGORY_ORDER:
+         filtered = [s for s in sample_records.get(category, []) if s['is_correct'] == filter_correct]
+         if len(filtered) < n_samples:
+             logger.warning(f" {category}: only {len(filtered)} samples, need {n_samples}")
+             continue
+         sampled = rng.sample(filtered, n_samples)
+         for layer_idx in target_layers:
+             vectors = [record['hidden_states'][layer_idx]
+                        for record in sampled if layer_idx in record['hidden_states']]
+             if vectors:
+                 result[layer_idx][category] = np.mean(vectors, axis=0)
+
+     return dict(result)
+
+
+ def compute_all_samples_reps(
+     sample_records: Dict[str, List[dict]],
+     target_layers: List[int],
+ ) -> Dict[int, Dict[str, np.ndarray]]:
+     """Compute average representations using ALL samples (no filtering)."""
+     result = defaultdict(dict)
+     for category in CATEGORY_ORDER:
+         records = sample_records.get(category, [])
+         if not records:
+             continue
+         for layer_idx in target_layers:
+             vectors = [r['hidden_states'][layer_idx]
+                        for r in records if layer_idx in r['hidden_states']]
+             if vectors:
+                 result[layer_idx][category] = np.mean(vectors, axis=0)
+     return dict(result)
+
+
+ # ============================================================================
+ # Accuracy
+ # ============================================================================
+
+ def compute_accuracy_stats(sample_records, scale, model_type):
+     stats = {'model': model_type, 'scale': scale}
+     total_correct, total_count = 0, 0
+     for cat in CATEGORY_ORDER:
+         records = sample_records.get(cat, [])
+         n = len(records)
+         correct = sum(1 for r in records if r['is_correct'])
+         stats[f'{cat}_total'] = n
+         stats[f'{cat}_correct'] = correct
+         stats[f'{cat}_accuracy'] = correct / n if n > 0 else 0.0
+         total_correct += correct
+         total_count += n
+     stats['overall_total'] = total_count
+     stats['overall_correct'] = total_correct
+     stats['overall_accuracy'] = total_correct / total_count if total_count > 0 else 0.0
+     return stats
+
+
+ def save_per_sample_predictions(sample_records, scale, save_path):
+     rows = []
+     for cat in CATEGORY_ORDER:
+         for record in sample_records.get(cat, []):
+             rows.append({
+                 'index': record['index'], 'category': cat, 'scale': scale,
+                 'predicted': record['predicted'], 'expected': cat,
+                 'is_correct': record['is_correct'],
+             })
+     pd.DataFrame(rows).to_csv(save_path, index=False)
+     logger.info(f"Saved {len(rows)} per-sample predictions to {save_path}")
+
+
+ def save_per_sample_norms(sample_records, scale, save_path):
+     """Save L2 norm of each sample's hidden state at each layer."""
+     rows = []
+     for cat in CATEGORY_ORDER:
+         for record in sample_records.get(cat, []):
+             row = {
+                 'index': record['index'],
+                 'category': cat,
+                 'scale': scale,
+                 'is_correct': record['is_correct'],
+             }
+             for layer_idx, state in record['hidden_states'].items():
+                 row[f'norm_L{layer_idx}'] = float(np.linalg.norm(state))
+             rows.append(row)
+     pd.DataFrame(rows).to_csv(save_path, index=False)
+     logger.info(f"Saved {len(rows)} per-sample norms to {save_path}")
+
+
+ # ============================================================================
+ # Analysis Functions
+ # ============================================================================
+
+ def compute_similarity_matrix(representations: Dict[str, np.ndarray]) -> pd.DataFrame:
+     available = [c for c in CATEGORY_ORDER if c in representations]
+     vectors = np.array([representations[cat] for cat in available])
+     sim_matrix = cosine_similarity(vectors)
+     return pd.DataFrame(sim_matrix, index=available, columns=available)
+
+
+ def analyze_hypothesis(sim_df, model_name):
+     results = {'model': model_name}
+     pairs_to_check = {
+         'above_far': ('above', 'far'), 'under_close': ('under', 'close'),
+         'left_right': ('left', 'right'),
+     }
+     for pair_name, (cat1, cat2) in pairs_to_check.items():
+         if cat1 in sim_df.index and cat2 in sim_df.columns:
+             results[f'sim_{pair_name}'] = sim_df.loc[cat1, cat2]
+         else:
+             results[f'sim_{pair_name}'] = None
+     return results
+
+
+ # ============================================================================
+ # Visualization
+ # ============================================================================
+
+ def plot_similarity_heatmap(sim_df, title, save_path):
+     plt.figure(figsize=(10, 8))
+     available_order = [c for c in CATEGORY_ORDER if c in sim_df.index]
+     sim_df_ordered = sim_df.loc[available_order, available_order]
+     sns.heatmap(sim_df_ordered, annot=True, fmt='.4f', cmap='RdYlBu_r',
+                 center=0.5, vmin=0, vmax=1, square=True, linewidths=0.5,
+                 cbar_kws={'label': 'Cosine Similarity'})
+     plt.title(title, fontsize=14, fontweight='bold')
+     plt.tight_layout()
+     plt.savefig(save_path, dpi=300, bbox_inches='tight')
+     plt.close()
+     logger.info(f"Saved heatmap: {save_path}")
+
+
+ def _extract_pair_trajectory(all_layer_sims, cat1, cat2):
+     layers = sorted(all_layer_sims.keys())
+     valid_layers, values = [], []
+     for l in layers:
+         df = all_layer_sims[l]
+         if cat1 in df.index and cat2 in df.columns:
+             valid_layers.append(l)
+             values.append(df.loc[cat1, cat2])
+     return valid_layers, values
+
+
+ def get_representative_layers(all_layers, n=5):
+     if len(all_layers) <= n:
+         return list(all_layers)
+     indices = np.linspace(0, len(all_layers) - 1, n, dtype=int)
+     return [all_layers[i] for i in indices]
+
+
+ def plot_similarity_trajectories(all_layer_sims, title, save_path):
+     fig, axes = plt.subplots(1, 2, figsize=(20, 7))
+
+     ax = axes[0]
+     for cat1, cat2, label, color in TRAJECTORY_PAIRS['hypothesis']:
+         layers, vals = _extract_pair_trajectory(all_layer_sims, cat1, cat2)
+         ax.plot(layers, vals, '-', color=color, label=label, linewidth=2.5)
+     for cat1, cat2, label, color in TRAJECTORY_PAIRS['within_axis']:
+         layers, vals = _extract_pair_trajectory(all_layer_sims, cat1, cat2)
+         ax.plot(layers, vals, '--', color=color, label=label, linewidth=1.8)
+     for cat1, cat2, label, color in TRAJECTORY_PAIRS['counter_hypothesis']:
+         layers, vals = _extract_pair_trajectory(all_layer_sims, cat1, cat2)
+         ax.plot(layers, vals, ':', color=color, label=label, linewidth=1.5, alpha=0.8)
+     ax.set_xlabel('Layer Index')
+     ax.set_ylabel('Cosine Similarity')
+     ax.set_title(f'{title}\nPairwise Similarity Across Layers')
+     ax.legend(fontsize=9, loc='best')
+     ax.grid(True, alpha=0.3)
+
+     ax = axes[1]
+     lr_layers, lr_vals = _extract_pair_trajectory(all_layer_sims, 'left', 'right')
+     lr_dict = dict(zip(lr_layers, lr_vals))
+     for cat1, cat2, label, color in TRAJECTORY_PAIRS['hypothesis']:
+         layers, vals = _extract_pair_trajectory(all_layer_sims, cat1, cat2)
+         diffs = [v - lr_dict.get(l, 0) for l, v in zip(layers, vals)]
+         ax.plot(layers, diffs, '-', color=color, label=f'{label} - left-right', linewidth=2.5)
+     for cat1, cat2, label, color in TRAJECTORY_PAIRS['counter_hypothesis']:
+         layers, vals = _extract_pair_trajectory(all_layer_sims, cat1, cat2)
+         diffs = [v - lr_dict.get(l, 0) for l, v in zip(layers, vals)]
+         ax.plot(layers, diffs, ':', color=color, label=f'{label} - left-right', linewidth=1.5, alpha=0.8)
+     for cat1, cat2, label, color in TRAJECTORY_PAIRS['within_axis']:
+         if label == 'left-right':
+             continue
+         layers, vals = _extract_pair_trajectory(all_layer_sims, cat1, cat2)
+         diffs = [v - lr_dict.get(l, 0) for l, v in zip(layers, vals)]
+         ax.plot(layers, diffs, '--', color=color, label=f'{label} - left-right', linewidth=1.5, alpha=0.7)
+     ax.axhline(y=0, color='gray', linestyle='-', linewidth=1, alpha=0.5)
+     ax.set_xlabel('Layer Index')
+     ax.set_ylabel('Similarity Difference (pair - left-right)')
+     ax.set_title(f'{title}\nRelative to Left-Right Baseline')
+     ax.legend(fontsize=8, loc='best')
+     ax.grid(True, alpha=0.3)
+
+     plt.tight_layout()
+     plt.savefig(save_path, dpi=300, bbox_inches='tight')
+     plt.close()
+     logger.info(f"Saved trajectory: {save_path}")
+
+
+ def plot_cross_scale_trajectories(cross_scale_data, model_type, save_path):
+     pairs = [
+         ('above', 'far', 'above-far (hypothesis)'),
+         ('under', 'close', 'under-close (hypothesis)'),
+         ('left', 'right', 'left-right (control)'),
+     ]
+     fig, axes = plt.subplots(1, len(pairs), figsize=(7 * len(pairs), 6))
+     if len(pairs) == 1:
+         axes = [axes]
+     for idx, (cat1, cat2, label) in enumerate(pairs):
+         ax = axes[idx]
+         for scale in ['vanilla', '80k', '400k', '800k', '2m', 'roborefer']:
+             if scale not in cross_scale_data:
+                 continue
+             layers, vals = _extract_pair_trajectory(cross_scale_data[scale], cat1, cat2)
+             ax.plot(layers, vals, '-', color=SCALE_COLORS.get(scale, 'gray'), label=scale, linewidth=2)
+         ax.set_xlabel('Layer Index')
+         ax.set_ylabel('Cosine Similarity')
+         ax.set_title(label, fontweight='bold')
+         ax.legend(fontsize=10)
+         ax.grid(True, alpha=0.3)
+     fig.suptitle(f'{model_type.upper()} - Similarity Trajectory Across Scales',
+                  fontsize=15, fontweight='bold', y=1.02)
+     plt.tight_layout()
+     plt.savefig(save_path, dpi=300, bbox_inches='tight')
+     plt.close()
+     logger.info(f"Saved cross-scale trajectory: {save_path}")
+
+
+ def plot_similarity_evolution_heatmap(cross_scale_data, model_type, save_path):
+     pairs = [
+         ('above', 'far', 'above-far'), ('under', 'close', 'under-close'),
+         ('left', 'right', 'left-right'), ('above', 'under', 'above-under'),
+         ('far', 'close', 'far-close'),
+     ]
+     scale_order = ['vanilla', '80k', '400k', '800k', '2m', 'roborefer']
+     available_scales = [s for s in scale_order if s in cross_scale_data]
+     if not available_scales:
+         return
+     first_scale = available_scales[0]
+     all_layers = sorted(cross_scale_data[first_scale].keys())
+
+     fig, axes = plt.subplots(len(pairs), 1, figsize=(max(14, len(all_layers) * 0.5), 3 * len(pairs)))
+     if len(pairs) == 1:
+         axes = [axes]
+     for idx, (cat1, cat2, label) in enumerate(pairs):
+         ax = axes[idx]
+         matrix = np.full((len(available_scales), len(all_layers)), np.nan)
+         for si, scale in enumerate(available_scales):
+             layer_sims = cross_scale_data[scale]
+             for li, layer in enumerate(all_layers):
+                 if layer in layer_sims:
+                     df = layer_sims[layer]
+                     if cat1 in df.index and cat2 in df.columns:
+                         matrix[si, li] = df.loc[cat1, cat2]
+         im = ax.imshow(matrix, aspect='auto', cmap='RdYlBu_r', vmin=0.5, vmax=1.0)
+         ax.set_yticks(range(len(available_scales)))
+         ax.set_yticklabels(available_scales, fontsize=10)
+         step = max(1, len(all_layers) // 15)
+         ax.set_xticks(range(0, len(all_layers), step))
+         ax.set_xticklabels([str(all_layers[i]) for i in range(0, len(all_layers), step)], fontsize=8)
+         ax.set_title(label, fontweight='bold')
+         ax.set_xlabel('Layer Index')
+         fig.colorbar(im, ax=ax, label='Cosine Similarity', shrink=0.8)
+     fig.suptitle(f'{model_type.upper()} - Similarity Evolution (Layer x Scale)',
+                  fontsize=15, fontweight='bold', y=1.01)
+     plt.tight_layout()
+     plt.savefig(save_path, dpi=300, bbox_inches='tight')
+     plt.close()
+     logger.info(f"Saved evolution heatmap: {save_path}")
+
+
+ # ============================================================================
+ # Fix 3: Overlay Trajectory Plots
+ # ============================================================================
+
+ def plot_overlay_trajectories(
+     datasets: Dict[str, Dict[int, pd.DataFrame]],
+     styles: Dict[str, Tuple[str, str, float]],
+     title: str,
+     save_path: str,
+ ):
+     """Plot overlay trajectories for multiple datasets (correct, incorrect, all).
+
+     datasets: {name -> {layer -> sim_df}}
+     styles: {name -> (linestyle, color, linewidth)}
+     """
+     n_pairs = len(KEY_PAIRS)
+     fig, axes = plt.subplots(1, n_pairs, figsize=(5.5 * n_pairs, 5.5))
+     if n_pairs == 1:
+         axes = [axes]
+
+     for idx, (cat1, cat2, label) in enumerate(KEY_PAIRS):
+         ax = axes[idx]
+         for name, layer_sims in datasets.items():
+             ls, color, lw = styles[name]
+             layers, vals = _extract_pair_trajectory(layer_sims, cat1, cat2)
+             if layers:
+                 ax.plot(layers, vals, linestyle=ls, color=color, label=name, linewidth=lw)
+         ax.set_xlabel('Layer Index', fontsize=10)
+         ax.set_ylabel('Cosine Similarity', fontsize=10)
+         ax.set_title(label, fontsize=11, fontweight='bold')
+         ax.legend(fontsize=8)
+         ax.grid(True, alpha=0.3)
+
+     fig.suptitle(title, fontsize=14, fontweight='bold', y=1.02)
+     plt.tight_layout()
+     plt.savefig(save_path, dpi=300, bbox_inches='tight')
+     plt.close()
+     logger.info(f"Saved overlay trajectory: {save_path}")
+
+
+ def generate_overlay_plots(
+     correct_sims, incorrect_sims, all_sims,
+     scale, model_type, save_dir,
+ ):
+     """Generate all 3 overlay trajectory variants for a single scale."""
+     prefix = f'{model_type.upper()} ({scale})'
+
+     # 1. correct + all
+     if correct_sims and all_sims:
+         plot_overlay_trajectories(
+             {'correct': correct_sims, 'all': all_sims},
+             {'correct': ('-', '#2ca02c', 2.5), 'all': ('--', '#7f7f7f', 1.8)},
+             f'{prefix} - Correct vs All Samples',
+             os.path.join(save_dir, f'overlay_correct_all_{scale}.png'),
+         )
+
+     # 2. correct + incorrect
+     if correct_sims and incorrect_sims:
+         plot_overlay_trajectories(
+             {'correct': correct_sims, 'incorrect': incorrect_sims},
+             {'correct': ('-', '#2ca02c', 2.5), 'incorrect': ('-', '#d62728', 2.5)},
+             f'{prefix} - Correct vs Incorrect',
+             os.path.join(save_dir, f'overlay_correct_incorrect_{scale}.png'),
+         )
+
+     # 3. correct + incorrect + all
+     if correct_sims and all_sims:
+         ds = {'correct': correct_sims, 'all': all_sims}
+         st = {'correct': ('-', '#2ca02c', 2.5), 'all': ('--', '#7f7f7f', 1.8)}
+         if incorrect_sims:
+             ds['incorrect'] = incorrect_sims
+             st['incorrect'] = ('-', '#d62728', 2.0)
+         plot_overlay_trajectories(
+             ds, st,
+             f'{prefix} - Correct vs Incorrect vs All',
+             os.path.join(save_dir, f'overlay_all_{scale}.png'),
+         )
+
+
+ # ============================================================================
+ # Accuracy & Ablation Visualization
+ # ============================================================================
+
+ def plot_accuracy_chart(accuracy_records, model_type, save_path):
+     fig, ax = plt.subplots(figsize=(14, 6))
+     scales = [r['scale'] for r in accuracy_records]
+     x = np.arange(len(CATEGORY_ORDER) + 1)
+     width = 0.8 / len(scales)
+     for i, record in enumerate(accuracy_records):
+         values = [record.get(f'{cat}_accuracy', 0) for cat in CATEGORY_ORDER]
+         values.append(record.get('overall_accuracy', 0))
+         offset = (i - len(scales) / 2 + 0.5) * width
+         color = SCALE_COLORS.get(record['scale'], 'gray')
+         bars = ax.bar(x + offset, values, width, label=record['scale'], color=color)
+         for bar, val in zip(bars, values):
+             if val > 0:
+                 ax.annotate(f'{val:.0%}', xy=(bar.get_x() + bar.get_width() / 2, bar.get_height()),
+                             xytext=(0, 2), textcoords='offset points',
+                             ha='center', va='bottom', fontsize=6, rotation=90)
+     ax.set_ylabel('Accuracy')
+     ax.set_title(f'{model_type.upper()} - Per-Category Accuracy Across Scales', fontweight='bold')
+     ax.set_xticks(x)
+     ax.set_xticklabels(CATEGORY_ORDER + ['overall'])
+     ax.legend(fontsize=9)
+     ax.set_ylim(0, 1.15)
+     ax.axhline(y=0.5, color='gray', linestyle='--', alpha=0.5)
+     plt.tight_layout()
+     plt.savefig(save_path, dpi=300, bbox_inches='tight')
+     plt.close()
+     logger.info(f"Saved accuracy chart: {save_path}")
+
+
+ def plot_ablation_summary(ablation_data, model_type, save_path, include_roborefer=False):
+     pairs = [
+         ('above', 'far', 'above-far', '#d62728'),
+         ('under', 'close', 'under-close', '#1f77b4'),
+         ('left', 'right', 'left-right', '#2ca02c'),
+     ]
+     if include_roborefer:
+         scale_order = ['vanilla', '80k', '400k', '800k', '2m', 'roborefer']
+     else:
+         scale_order = ['vanilla', '80k', '400k', '800k', '2m']
+
+     fig, axes = plt.subplots(1, 2, figsize=(18, 7))
+
+     ax = axes[0]
+     for cat1, cat2, label, color in pairs:
+         x_vals, y_correct, y_all = [], [], []
+         for i, scale in enumerate(scale_order):
+             entry = next((d for d in ablation_data if d['scale'] == scale), None)
+             if entry is None:
+                 continue
+             sim_c = entry.get(f'correct_{cat1}_{cat2}')
+             sim_a = entry.get(f'all_{cat1}_{cat2}')
+             if sim_c is not None:
+                 x_vals.append(i)
+                 y_correct.append(sim_c)
+                 # Guard against missing all-samples entries so the plot gets floats, not None
+                 y_all.append(sim_a if sim_a is not None else float('nan'))
+         if x_vals:
+             ax.plot(x_vals, y_correct, '-o', color=color, label=f'{label} (correct)', linewidth=2.5)
+             ax.plot(x_vals, y_all, '--s', color=color, label=f'{label} (all)', linewidth=1.5, alpha=0.6)
+     ax.set_xticks(range(len(scale_order)))
+     ax.set_xticklabels(scale_order)
+     ax.set_xlabel('Scale')
+     ax.set_ylabel('Cosine Similarity')
+     ax.set_title('Correct-Only vs All-Samples Similarity', fontweight='bold')
+     ax.legend(fontsize=8, loc='best')
+     ax.grid(True, alpha=0.3)
+
+     ax2 = axes[1]
+     x_vals, acc_vals = [], []
+     for i, scale in enumerate(scale_order):
+         entry = next((d for d in ablation_data if d['scale'] == scale), None)
+         if entry and 'accuracy' in entry:
+             x_vals.append(i)
+             acc_vals.append(entry['accuracy'])
+     ax2.bar(x_vals, acc_vals, color=[SCALE_COLORS.get(scale_order[x], 'gray') for x in x_vals], alpha=0.8)
+     for x, acc in zip(x_vals, acc_vals):
+         ax2.annotate(f'{acc:.1%}', xy=(x, acc), xytext=(0, 5), textcoords='offset points',
+                      ha='center', fontsize=10, fontweight='bold')
+     ax2.set_xticks(range(len(scale_order)))
+     ax2.set_xticklabels(scale_order)
+     ax2.set_xlabel('Scale')
+     ax2.set_ylabel('Overall Accuracy')
+     ax2.set_title('Model Accuracy by Scale', fontweight='bold')
+     ax2.set_ylim(0, 1.15)
+     ax2.grid(True, alpha=0.3, axis='y')
+
+     fig.suptitle(f'{model_type.upper()} - Ablation: Is Similarity Change Due to Accuracy?',
+                  fontsize=15, fontweight='bold', y=1.02)
+     plt.tight_layout()
+     plt.savefig(save_path, dpi=300, bbox_inches='tight')
+     plt.close()
+     logger.info(f"Saved ablation summary: {save_path}")
+
+
+ # ============================================================================
+ # Process Subset & CSV I/O
+ # ============================================================================
+
+ def process_subset(
+     subset_name, all_layer_reps, target_layers, scale, model_type, output_dir, n_samples,
+ ):
+     """Compute similarity matrices and save outputs for one subset."""
+     scale_sims = {}
+     results_list = []
+
+     for layer_idx in sorted(all_layer_reps.keys()):
+         reps = all_layer_reps[layer_idx]
+         if len(reps) < 2:
+             continue
+         sim_df = compute_similarity_matrix(reps)
+         scale_sims[layer_idx] = sim_df
+         results = analyze_hypothesis(sim_df, f"{model_type}_{scale}_{subset_name}")
+         results['layer_idx'] = layer_idx
+         results['subset'] = subset_name
+         results['scale'] = scale
+         results['n_samples_per_cat'] = n_samples
+         results_list.append(results)
+         csv_out = os.path.join(output_dir, 'csv')
+         os.makedirs(csv_out, exist_ok=True)
+         sim_df.to_csv(os.path.join(csv_out, f'similarity_{scale}_L{layer_idx}.csv'))
+
+     if scale_sims:
+         rep_layers = get_representative_layers(sorted(scale_sims.keys()))
+         for layer_idx in rep_layers:
+             plot_similarity_heatmap(
+                 scale_sims[layer_idx],
+                 f'{model_type.upper()} ({scale}) [{subset_name}, n={n_samples}] - Layer {layer_idx}',
+                 os.path.join(output_dir, f'heatmap_{scale}_L{layer_idx}.png')
+             )
+         plot_similarity_trajectories(
+             scale_sims,
+             f'{model_type.upper()} ({scale}) [{subset_name}, n={n_samples}]',
+             os.path.join(output_dir, f'trajectory_{scale}.png')
+         )
+
+     return scale_sims, results_list
+
+
+ def _load_scale_sims_from_csvs(subset_dir, scale):
+     import glob as glob_mod
+     pattern = os.path.join(subset_dir, 'csv', f'similarity_{scale}_L*.csv')
+     files = glob_mod.glob(pattern)
+     layer_sims = {}
+     for fpath in files:
+         basename = os.path.basename(fpath)
+         layer_str = basename.replace(f'similarity_{scale}_L', '').replace('.csv', '')
+         try:
+             layer_idx = int(layer_str)
+         except ValueError:
+             continue
+         layer_sims[layer_idx] = pd.read_csv(fpath, index_col=0)
+     return layer_sims
+
+
+ # ============================================================================
+ # Merge Mode
+ # ============================================================================
+
+ def run_merge(model_type, scales, output_dir,
+               correct_dir, incorrect_dir, all_dir, accuracy_dir, comparison_dir,
+               write_output_dir=None):
+     """Merge mode. Read from *_dir, write to write_output_dir (or the same dirs if None)."""
+     # Write destinations
+     w_comparison = os.path.join(write_output_dir, 'comparison') if write_output_dir else comparison_dir
+     w_accuracy = os.path.join(write_output_dir, 'accuracy') if write_output_dir else accuracy_dir
+     if write_output_dir:
+         os.makedirs(w_comparison, exist_ok=True)
+         os.makedirs(w_accuracy, exist_ok=True)
+
+     scale_order = ['vanilla', '80k', '400k', '800k', '2m', 'roborefer']
+     available_scales = [s for s in scale_order if s in scales]
+
+     cross_scale_correct, cross_scale_incorrect, cross_scale_all = {}, {}, {}
+     for scale in available_scales:
+         c_sims = _load_scale_sims_from_csvs(correct_dir, scale)
+         if c_sims:
+             cross_scale_correct[scale] = c_sims
+             logger.info(f" Loaded correct-only CSVs for {scale}: {len(c_sims)} layers")
+         i_sims = _load_scale_sims_from_csvs(incorrect_dir, scale)
+         if i_sims:
+             cross_scale_incorrect[scale] = i_sims
+             logger.info(f" Loaded incorrect-only CSVs for {scale}: {len(i_sims)} layers")
+         a_sims = _load_scale_sims_from_csvs(all_dir, scale)
+         if a_sims:
+             cross_scale_all[scale] = a_sims
+             logger.info(f" Loaded all-samples CSVs for {scale}: {len(a_sims)} layers")
+
+     # Cross-scale trajectories + evolution heatmaps
+     for name, data, subdir in [
+         ('correct-only', cross_scale_correct, 'cross_scale_correct_only'),
+         ('incorrect-only', cross_scale_incorrect, 'cross_scale_incorrect_only'),
+         ('all-samples', cross_scale_all, 'cross_scale_all_samples'),
+     ]:
+         if len(data) > 1:
+             logger.info(f"\n--- Cross-scale comparison ({name}) ---")
+             plot_cross_scale_trajectories(
+                 data, model_type,
+                 os.path.join(w_comparison, f'{subdir}.png')
+             )
+             plot_similarity_evolution_heatmap(
+                 data, model_type,
+                 os.path.join(w_comparison, f'evolution_heatmap_{subdir.replace("cross_scale_", "")}.png')
+             )
+
+     # Per-scale overlay plots (Fix 3)
+     for scale in available_scales:
+         c = cross_scale_correct.get(scale)
+         i = cross_scale_incorrect.get(scale)
+         a = cross_scale_all.get(scale)
+         generate_overlay_plots(c, i, a, scale, model_type, w_comparison)
+
+     # Accuracy chart
+     accuracy_records = []
+     for scale in available_scales:
+         acc_path = os.path.join(accuracy_dir, 'json', f'accuracy_{scale}.json')
+         if os.path.exists(acc_path):
+             with open(acc_path) as f:
+                 accuracy_records.append(json.load(f))
+     if accuracy_records:
+         w_acc_csv = os.path.join(w_accuracy, 'csv')
+         os.makedirs(w_acc_csv, exist_ok=True)
+         pd.DataFrame(accuracy_records).to_csv(os.path.join(w_acc_csv, 'accuracy_summary.csv'), index=False)
+         plot_accuracy_chart(accuracy_records, model_type,
+                             os.path.join(w_accuracy, 'accuracy_chart.png'))
+
+     # Ablation summary
+     ablation_data = []
+     for scale in available_scales:
+         abl_path = os.path.join(comparison_dir, 'json', f'ablation_{scale}.json')
+         if os.path.exists(abl_path):
+             with open(abl_path) as f:
+                 ablation_data.append(json.load(f))
+     if ablation_data:
+         w_comp_csv = os.path.join(w_comparison, 'csv')
+         os.makedirs(w_comp_csv, exist_ok=True)
+         pd.DataFrame(ablation_data).to_csv(os.path.join(w_comp_csv, 'ablation_summary.csv'), index=False)
+         plot_ablation_summary(ablation_data, model_type,
+                               os.path.join(w_comparison, 'ablation_summary.png'),
+                               include_roborefer=bool(write_output_dir))
+
+     w_out = write_output_dir or output_dir
+     logger.info(f"\n=== Merge Complete ===\nResults in: {w_out}")
+
+
+ # ============================================================================
+ # Main
+ # ============================================================================
+
+ def main():
+     parser = argparse.ArgumentParser(description='Correct Filter Analysis')
+     parser.add_argument('--data_path', type=str,
+                         default='/data/shared/Qwen/EmbSpatial-Bench/EmbSpatial-Bench.tsv')
+     parser.add_argument('--model_type', type=str, required=True, choices=['molmo', 'nvila', 'qwen'])
+     parser.add_argument('--scales', type=str, nargs='+',
+                         default=['vanilla', '80k', '400k', '800k', '2m'])
+     parser.add_argument('--output_dir', type=str,
+                         default='/data/shared/Qwen/experiments/correct_filter/results')
+     parser.add_argument('--device', type=str, default='cuda')
+     parser.add_argument('--seed', type=int, default=42)
+     parser.add_argument('--merge', action='store_true')
+     parser.add_argument('--merge-output-dir', type=str, default=None, dest='merge_output_dir',
+                         help='Override output dir for merge cross-scale plots (for NVILA dual merge)')
+     parser.add_argument('--no-auto-roborefer', action='store_true', dest='no_auto_roborefer')
+
+     args = parser.parse_args()
+
+     if args.model_type == 'nvila' and 'roborefer' not in args.scales and not args.no_auto_roborefer:
+         args.scales.append('roborefer')
+
+     np.random.seed(args.seed)
+     torch.manual_seed(args.seed)
+     random.seed(args.seed)
+
+     output_dir = os.path.join(args.output_dir, args.model_type)
+     correct_dir = os.path.join(output_dir, 'correct_only')
+     incorrect_dir = os.path.join(output_dir, 'incorrect_only')
+     all_dir = os.path.join(output_dir, 'all_samples')
+     accuracy_dir = os.path.join(output_dir, 'accuracy')
+     comparison_dir = os.path.join(output_dir, 'comparison')
+     for d in [correct_dir, incorrect_dir, all_dir, accuracy_dir, comparison_dir]:
+         os.makedirs(d, exist_ok=True)
+
+     # Merge mode
+     if args.merge:
+         logger.info("\n=== MERGE MODE ===")
+         run_merge(args.model_type, args.scales, output_dir,
+                   correct_dir, incorrect_dir, all_dir, accuracy_dir, comparison_dir,
+                   write_output_dir=args.merge_output_dir)
+         return
+
+     # Normal mode
+     logger.info("\n=== Loading & Modifying EmbSpatialBench Data (ALL samples) ===")
+     data = load_and_modify_data(args.data_path, args.seed)
+
+     model_configs = MODEL_CONFIGS[args.model_type]
+
+     all_results = []
+     accuracy_records = []
+     cross_scale_correct = {}
+     cross_scale_incorrect = {}
+     cross_scale_all = {}
+     ablation_data = []
+
+     for scale in args.scales:
+         if scale not in model_configs:
+             logger.warning(f"Scale {scale} not available for {args.model_type}, skipping...")
+             continue
+
+         model_path = model_configs[scale]
+         if not os.path.exists(model_path) and not model_path.startswith(('Qwen/', 'allenai/')):
+             logger.warning(f"Model path not found: {model_path}, skipping...")
+             continue
+
+         logger.info(f"\n{'='*60}")
+         logger.info(f"Processing {args.model_type} - {scale}")
+         logger.info(f"Model path: {model_path}")
+         logger.info(f"{'='*60}")
+
+         try:
+             extractor = get_extractor(args.model_type, model_path, scale=scale, device=args.device)
+             target_layers = extractor.target_layers
+
+             # Phase A: Extract all samples with predictions
+             logger.info("\n--- Phase A: Extracting hidden states with predictions ---")
+             sample_records = extract_all_with_predictions(extractor, data)
+
+             acc_csv_dir = os.path.join(accuracy_dir, 'csv')
+             acc_json_dir = os.path.join(accuracy_dir, 'json')
+             os.makedirs(acc_csv_dir, exist_ok=True)
1408
+ os.makedirs(acc_json_dir, exist_ok=True)
1409
+
1410
+ save_per_sample_predictions(
1411
+ sample_records, scale,
1412
+ os.path.join(acc_csv_dir, f'predictions_{scale}.csv')
1413
+ )
1414
+ save_per_sample_norms(
1415
+ sample_records, scale,
1416
+ os.path.join(acc_csv_dir, f'norms_{scale}.csv')
1417
+ )
1418
+
1419
+ acc_stats = compute_accuracy_stats(sample_records, scale, args.model_type)
1420
+ accuracy_records.append(acc_stats)
1421
+ logger.info(f"\n Accuracy for {scale}: {acc_stats['overall_accuracy']:.1%}")
1422
+ for cat in CATEGORY_ORDER:
1423
+ logger.info(f" {cat}: {acc_stats[f'{cat}_correct']}/{acc_stats[f'{cat}_total']} "
1424
+ f"= {acc_stats[f'{cat}_accuracy']:.1%}")
1425
+
1426
+ # Phase B: Compute all-samples similarity for ALL layers
1427
+ logger.info("\n--- Phase B: All-samples similarity (all layers) ---")
1428
+ all_reps = compute_all_samples_reps(sample_records, target_layers)
1429
+ all_sims, all_results_sub = process_subset(
1430
+ 'all', all_reps, target_layers, scale,
1431
+ args.model_type, all_dir, sum(len(sample_records.get(c, [])) for c in CATEGORY_ORDER),
1432
+ )
1433
+ all_results.extend(all_results_sub)
1434
+ cross_scale_all[scale] = all_sims
1435
+
1436
+ # Phase C: Balanced sampling
1437
+ logger.info("\n--- Phase C: Balanced sampling ---")
1438
+ n_correct = compute_balanced_size(sample_records, filter_correct=True)
1439
+ n_incorrect = compute_balanced_size(sample_records, filter_correct=False)
1440
+ logger.info(f" Correct group: {n_correct} samples/category")
1441
+ logger.info(f" Incorrect group: {n_incorrect} samples/category")
1442
+
1443
+ # Process correct-only subset
1444
+ correct_layer_sims = {}
1445
+ if n_correct > 0:
1446
+ logger.info(f"\n--- Processing correct-only (n={n_correct}) ---")
1447
+ correct_reps = balanced_sample_and_average(
1448
+ sample_records, filter_correct=True, n_samples=n_correct,
1449
+ target_layers=target_layers, seed=args.seed,
1450
+ )
1451
+ correct_layer_sims, correct_results = process_subset(
1452
+ 'correct', correct_reps, target_layers, scale,
1453
+ args.model_type, correct_dir, n_correct,
1454
+ )
1455
+ all_results.extend(correct_results)
1456
+ cross_scale_correct[scale] = correct_layer_sims
1457
+ else:
1458
+ logger.warning(f" Skipping correct-only: no correct samples in some category")
1459
+
1460
+ # Process incorrect-only subset
1461
+ incorrect_layer_sims = {}
1462
+ if n_incorrect > 0:
1463
+ logger.info(f"\n--- Processing incorrect-only (n={n_incorrect}) ---")
1464
+ incorrect_reps = balanced_sample_and_average(
1465
+ sample_records, filter_correct=False, n_samples=n_incorrect,
1466
+ target_layers=target_layers, seed=args.seed,
1467
+ )
1468
+ incorrect_layer_sims, incorrect_results = process_subset(
1469
+ 'incorrect', incorrect_reps, target_layers, scale,
1470
+ args.model_type, incorrect_dir, n_incorrect,
1471
+ )
1472
+ all_results.extend(incorrect_results)
1473
+ cross_scale_incorrect[scale] = incorrect_layer_sims
1474
+ else:
1475
+ logger.warning(f" Skipping incorrect-only: no incorrect samples in some category")
1476
+
1477
+ # Phase D: Overlay plots (Fix 3)
1478
+ generate_overlay_plots(
1479
+ correct_layer_sims or None,
1480
+ incorrect_layer_sims or None,
1481
+ all_sims or None,
1482
+ scale, args.model_type, comparison_dir,
1483
+ )
1484
+
1485
+ # Phase E: Build ablation entry (mean similarity across ALL layers)
1486
+ ablation_entry = {
1487
+ 'scale': scale,
1488
+ 'accuracy': acc_stats['overall_accuracy'],
1489
+ 'n_correct_per_cat': n_correct,
1490
+ 'n_incorrect_per_cat': n_incorrect,
1491
+ }
1492
+
1493
+ pairs_list = TRAJECTORY_PAIRS['hypothesis'] + TRAJECTORY_PAIRS['within_axis']
1494
+
1495
+ # All-samples: mean similarity across all layers
1496
+ if all_sims:
1497
+ for cat1, cat2, _, _ in pairs_list:
1498
+ vals = [float(all_sims[l].loc[cat1, cat2])
1499
+ for l in all_sims
1500
+ if cat1 in all_sims[l].index and cat2 in all_sims[l].columns]
1501
+ if vals:
1502
+ ablation_entry[f'all_{cat1}_{cat2}'] = float(np.mean(vals))
1503
+
1504
+ # Correct-only: mean similarity across all layers
1505
+ if correct_layer_sims:
1506
+ for cat1, cat2, _, _ in pairs_list:
1507
+ vals = [float(correct_layer_sims[l].loc[cat1, cat2])
1508
+ for l in correct_layer_sims
1509
+ if cat1 in correct_layer_sims[l].index and cat2 in correct_layer_sims[l].columns]
1510
+ if vals:
1511
+ ablation_entry[f'correct_{cat1}_{cat2}'] = float(np.mean(vals))
1512
+
1513
+ # Incorrect-only: mean similarity across all layers
1514
+ if incorrect_layer_sims:
1515
+ for cat1, cat2, _, _ in pairs_list:
1516
+ vals = [float(incorrect_layer_sims[l].loc[cat1, cat2])
1517
+ for l in incorrect_layer_sims
1518
+ if cat1 in incorrect_layer_sims[l].index and cat2 in incorrect_layer_sims[l].columns]
1519
+ if vals:
1520
+ ablation_entry[f'incorrect_{cat1}_{cat2}'] = float(np.mean(vals))
1521
+
1522
+ ablation_data.append(ablation_entry)
1523
+
1524
+ # Save per-scale JSONs
1525
+ comp_json_dir = os.path.join(comparison_dir, 'json')
1526
+ os.makedirs(comp_json_dir, exist_ok=True)
1527
+ with open(os.path.join(comp_json_dir, f'ablation_{scale}.json'), 'w') as f:
1528
+ json.dump(ablation_entry, f, indent=2, default=str)
1529
+ with open(os.path.join(acc_json_dir, f'accuracy_{scale}.json'), 'w') as f:
1530
+ json.dump(acc_stats, f, indent=2, default=str)
1531
+
1532
+ # Cleanup
1533
+ del sample_records
1534
+ extractor.cleanup()
1535
+
1536
+ except Exception as e:
1537
+ logger.error(f"Failed to process {args.model_type} - {scale}: {e}")
1538
+ import traceback
1539
+ traceback.print_exc()
1540
+ continue
1541
+
1542
+ # Cross-scale comparisons
1543
+ for name, data, subdir in [
1544
+ ('correct-only', cross_scale_correct, 'cross_scale_correct_only'),
1545
+ ('incorrect-only', cross_scale_incorrect, 'cross_scale_incorrect_only'),
1546
+ ('all-samples', cross_scale_all, 'cross_scale_all_samples'),
1547
+ ]:
1548
+ if len(data) > 1:
1549
+ logger.info(f"\n--- Cross-scale comparison ({name}) ---")
1550
+ plot_cross_scale_trajectories(
1551
+ data, args.model_type,
1552
+ os.path.join(comparison_dir, f'{subdir}.png')
1553
+ )
1554
+ plot_similarity_evolution_heatmap(
1555
+ data, args.model_type,
1556
+ os.path.join(comparison_dir, f'evolution_heatmap_{subdir.replace("cross_scale_", "")}.png')
1557
+ )
1558
+
1559
+ if accuracy_records:
1560
+ os.makedirs(os.path.join(accuracy_dir, 'csv'), exist_ok=True)
1561
+ pd.DataFrame(accuracy_records).to_csv(os.path.join(accuracy_dir, 'csv', 'accuracy_summary.csv'), index=False)
1562
+ # accuracy_chart.png is only written in merge mode (where all scales are present).
1563
+ # Writing it here (single-scale run) would overwrite the multi-scale merge chart
1564
+ # with a single-scale version whenever any individual scale is re-run.
1565
+
1566
+ if ablation_data:
1567
+ os.makedirs(os.path.join(comparison_dir, 'csv'), exist_ok=True)
1568
+ pd.DataFrame(ablation_data).to_csv(os.path.join(comparison_dir, 'csv', 'ablation_summary.csv'), index=False)
1569
+ plot_ablation_summary(ablation_data, args.model_type,
1570
+ os.path.join(comparison_dir, 'ablation_summary.png'))
1571
+
1572
+ if all_results:
1573
+ os.makedirs(os.path.join(output_dir, 'csv'), exist_ok=True)
1574
+ pd.DataFrame(all_results).to_csv(os.path.join(output_dir, 'csv', 'results_summary.csv'), index=False)
1575
+
1576
+ logger.info(f"\n{'='*60}")
1577
+ logger.info("=== Analysis Complete ===")
1578
+ logger.info(f"Results saved to: {output_dir}")
1579
+ logger.info(f"{'='*60}")
1580
+
1581
+
1582
+ if __name__ == '__main__':
1583
+ main()
correct_filter/norm_analysis.py ADDED
@@ -0,0 +1,454 @@
+#!/usr/bin/env python3
+"""
+Norm Analysis: Testing the "Neutral Zone Collapse" Hypothesis
+
+Hypothesis: Incorrect samples have higher inter-category cosine similarity NOT
+because they carry the opposite category's features, but because their spatial
+feature extraction failed — causing hidden states to collapse toward a neutral
+(text-bias) region with smaller norms.
+
+Verification: Compare L2 norms of hidden states between correct and incorrect
+samples per category and layer. If incorrect samples have systematically lower
+norms, it supports the "collapse to neutral zone" explanation.
+
+Reads:  results/{model_type}/accuracy/csv/norms_{scale}.csv
+        (produced by correct_filter_analysis.py)
+Writes: results/{model_type}/norm_analysis/
+"""
+
+import os
+import argparse
+import glob
+import logging
+
+import numpy as np
+import pandas as pd
+import matplotlib
+matplotlib.use('Agg')
+import matplotlib.pyplot as plt
+import seaborn as sns
+from scipy import stats
+
+logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
+logger = logging.getLogger(__name__)
+
+CATEGORY_ORDER = ['left', 'right', 'above', 'under', 'far', 'close']
+SCALE_ORDER = ['vanilla', '80k', '400k', '800k', '2m', 'roborefer']
+SCALE_COLORS = {
+    'vanilla': '#1f77b4', '80k': '#ff7f0e', '400k': '#2ca02c',
+    '800k': '#d62728', '2m': '#9467bd', 'roborefer': '#8c564b',
+}
+
+
+def load_norms(results_dir, model_type):
+    """Load all norms_{scale}.csv files for a model."""
+    csv_dir = os.path.join(results_dir, model_type, 'accuracy', 'csv')
+    all_dfs = []
+    for path in sorted(glob.glob(os.path.join(csv_dir, 'norms_*.csv'))):
+        df = pd.read_csv(path)
+        all_dfs.append(df)
+        logger.info(f"Loaded {path}: {len(df)} samples")
+    if not all_dfs:
+        raise FileNotFoundError(f"No norms_*.csv found in {csv_dir}")
+    return pd.concat(all_dfs, ignore_index=True)
+
+
+def get_layer_columns(df):
+    """Extract sorted layer columns from dataframe."""
+    cols = [c for c in df.columns if c.startswith('norm_L')]
+    return sorted(cols, key=lambda c: int(c.replace('norm_L', '')))
+
+
+def get_layer_index(col):
+    return int(col.replace('norm_L', ''))
+
+
+# ============================================================================
+# Analysis 1: Per-layer norm comparison (correct vs incorrect)
+# ============================================================================
+
+def compute_norm_stats(df):
+    """Compute mean/std/median norm for correct vs incorrect, per category × scale × layer."""
+    layer_cols = get_layer_columns(df)
+    rows = []
+    for scale in df['scale'].unique():
+        for cat in CATEGORY_ORDER:
+            subset = df[(df['scale'] == scale) & (df['category'] == cat)]
+            for is_correct in [True, False]:
+                group = subset[subset['is_correct'] == is_correct]
+                if len(group) == 0:
+                    continue
+                label = 'correct' if is_correct else 'incorrect'
+                for col in layer_cols:
+                    norms = group[col].dropna().values
+                    if len(norms) == 0:
+                        continue
+                    rows.append({
+                        'scale': scale, 'category': cat, 'group': label,
+                        'layer': get_layer_index(col), 'n_samples': len(norms),
+                        'mean_norm': np.mean(norms), 'std_norm': np.std(norms),
+                        'median_norm': np.median(norms),
+                    })
+    return pd.DataFrame(rows)
+
+
+def compute_norm_ratios(norm_stats):
+    """Compute incorrect/correct norm ratio per category × scale × layer."""
+    rows = []
+    for (scale, cat, layer), grp in norm_stats.groupby(['scale', 'category', 'layer']):
+        correct = grp[grp['group'] == 'correct']
+        incorrect = grp[grp['group'] == 'incorrect']
+        if len(correct) == 0 or len(incorrect) == 0:
+            continue
+        c_mean = correct['mean_norm'].values[0]
+        i_mean = incorrect['mean_norm'].values[0]
+        if c_mean > 0:
+            rows.append({
+                'scale': scale, 'category': cat, 'layer': layer,
+                'correct_mean': c_mean, 'incorrect_mean': i_mean,
+                'ratio': i_mean / c_mean,
+                'diff': i_mean - c_mean,
+                'n_correct': int(correct['n_samples'].values[0]),
+                'n_incorrect': int(incorrect['n_samples'].values[0]),
+            })
+    return pd.DataFrame(rows)
+
+
+def stat_test_norms(df, scale, layer_col):
+    """Mann-Whitney U test: are incorrect norms significantly different from correct?"""
+    rows = []
+    subset = df[df['scale'] == scale]
+    for cat in CATEGORY_ORDER:
+        cat_data = subset[subset['category'] == cat]
+        correct = cat_data[cat_data['is_correct'] == True][layer_col].dropna().values
+        incorrect = cat_data[cat_data['is_correct'] == False][layer_col].dropna().values
+        if len(correct) < 5 or len(incorrect) < 5:
+            continue
+        u_stat, p_val = stats.mannwhitneyu(correct, incorrect, alternative='two-sided')
+        # Effect size: rank-biserial correlation
+        n1, n2 = len(correct), len(incorrect)
+        r = 1 - (2 * u_stat) / (n1 * n2)
+        rows.append({
+            'category': cat, 'n_correct': n1, 'n_incorrect': n2,
+            'correct_mean': np.mean(correct), 'incorrect_mean': np.mean(incorrect),
+            'U_stat': u_stat, 'p_value': p_val, 'effect_size_r': r,
+            'significant': p_val < 0.05,
+        })
+    return pd.DataFrame(rows)
+
+
+# ============================================================================
+# Plots
+# ============================================================================
+
+def plot_norm_trajectory(norm_stats, scale, model_type, save_path):
+    """Per-scale: mean norm across layers, correct vs incorrect, per category."""
+    data = norm_stats[norm_stats['scale'] == scale]
+    if data.empty:
+        return
+
+    fig, axes = plt.subplots(2, 3, figsize=(18, 10), sharex=True)
+    fig.suptitle(f'{model_type} — {scale}: Hidden State L2 Norm by Layer\n'
+                 f'(Solid=correct, Dashed=incorrect)', fontsize=14)
+
+    for idx, cat in enumerate(CATEGORY_ORDER):
+        ax = axes[idx // 3][idx % 3]
+        for group, style in [('correct', '-'), ('incorrect', '--')]:
+            subset = data[(data['category'] == cat) & (data['group'] == group)]
+            if subset.empty:
+                continue
+            subset = subset.sort_values('layer')
+            ax.plot(subset['layer'], subset['mean_norm'], style,
+                    label=f'{group} (n={subset["n_samples"].iloc[0]})', linewidth=1.5)
+            ax.fill_between(
+                subset['layer'],
+                subset['mean_norm'] - subset['std_norm'],
+                subset['mean_norm'] + subset['std_norm'],
+                alpha=0.15,
+            )
+        ax.set_title(cat, fontsize=12)
+        ax.set_xlabel('Layer')
+        ax.set_ylabel('L2 Norm')
+        ax.legend(fontsize=8)
+        ax.grid(True, alpha=0.3)
+
+    plt.tight_layout()
+    plt.savefig(save_path, dpi=200, bbox_inches='tight')
+    plt.close()
+    logger.info(f"Saved: {save_path}")
+
+
+def plot_norm_ratio_trajectory(norm_ratios, scale, model_type, save_path):
+    """Per-scale: incorrect/correct norm ratio across layers, all 6 categories."""
+    data = norm_ratios[norm_ratios['scale'] == scale].sort_values('layer')
+    if data.empty:
+        return
+
+    fig, ax = plt.subplots(figsize=(12, 6))
+    for cat in CATEGORY_ORDER:
+        subset = data[data['category'] == cat]
+        if subset.empty:
+            continue
+        ax.plot(subset['layer'], subset['ratio'], label=cat, linewidth=1.5)
+
+    ax.axhline(y=1.0, color='black', linestyle=':', alpha=0.5, label='ratio=1 (equal)')
+    ax.set_title(f'{model_type} — {scale}: Incorrect/Correct Norm Ratio by Layer', fontsize=13)
+    ax.set_xlabel('Layer')
+    ax.set_ylabel('Norm Ratio (incorrect / correct)')
+    ax.legend(fontsize=9)
+    ax.grid(True, alpha=0.3)
+
+    plt.tight_layout()
+    plt.savefig(save_path, dpi=200, bbox_inches='tight')
+    plt.close()
+    logger.info(f"Saved: {save_path}")
+
+
+def plot_cross_scale_norm_ratio(norm_ratios, model_type, save_path):
+    """Cross-scale: norm ratio at representative layers, all categories averaged."""
+    if norm_ratios.empty:
+        return
+
+    layers = sorted(norm_ratios['layer'].unique())
+    n_layers = len(layers)
+    # Pick 5 representative layers
+    rep_indices = [0, n_layers // 4, n_layers // 2, 3 * n_layers // 4, n_layers - 1]
+    rep_layers = sorted(set(layers[i] for i in rep_indices))
+
+    available_scales = [s for s in SCALE_ORDER if s in norm_ratios['scale'].unique()]
+
+    fig, axes = plt.subplots(1, len(rep_layers), figsize=(4 * len(rep_layers), 5), sharey=True)
+    if len(rep_layers) == 1:
+        axes = [axes]
+
+    for ax, layer in zip(axes, rep_layers):
+        layer_data = norm_ratios[norm_ratios['layer'] == layer]
+        # Grouped bar: x=category, color=scale
+        x = np.arange(len(CATEGORY_ORDER))
+        width = 0.8 / max(len(available_scales), 1)
+        for si, scale in enumerate(available_scales):
+            vals = []
+            for cat in CATEGORY_ORDER:
+                row = layer_data[(layer_data['scale'] == scale) & (layer_data['category'] == cat)]
+                vals.append(row['ratio'].values[0] if len(row) > 0 else np.nan)
+            ax.bar(x + si * width, vals, width, label=scale,
+                   color=SCALE_COLORS.get(scale, '#999999'), alpha=0.8)
+
+        ax.axhline(y=1.0, color='black', linestyle=':', alpha=0.5)
+        ax.set_title(f'Layer {layer}', fontsize=11)
+        ax.set_xticks(x + width * (len(available_scales) - 1) / 2)
+        ax.set_xticklabels(CATEGORY_ORDER, rotation=45, fontsize=9)
+        ax.set_ylabel('Norm Ratio (incorr / corr)' if ax == axes[0] else '')
+        ax.grid(True, alpha=0.2, axis='y')
+
+    axes[-1].legend(fontsize=8, bbox_to_anchor=(1.02, 1), loc='upper left')
+    fig.suptitle(f'{model_type}: Incorrect/Correct Norm Ratio Across Scales', fontsize=13, y=1.02)
+    plt.tight_layout()
+    plt.savefig(save_path, dpi=200, bbox_inches='tight')
+    plt.close()
+    logger.info(f"Saved: {save_path}")
+
+
+def plot_overall_norm_comparison(norm_stats, model_type, save_path):
+    """Aggregate across categories: mean norm trajectory for correct vs incorrect, per scale."""
+    available_scales = [s for s in SCALE_ORDER if s in norm_stats['scale'].unique()]
+    if not available_scales:
+        return
+
+    fig, ax = plt.subplots(figsize=(12, 6))
+
+    for scale in available_scales:
+        color = SCALE_COLORS.get(scale, '#999999')
+        for group, style, alpha in [('correct', '-', 1.0), ('incorrect', '--', 0.7)]:
+            subset = norm_stats[(norm_stats['scale'] == scale) & (norm_stats['group'] == group)]
+            if subset.empty:
+                continue
+            agg = subset.groupby('layer')['mean_norm'].mean().reset_index()
+            agg = agg.sort_values('layer')
+            ax.plot(agg['layer'], agg['mean_norm'], style,
+                    color=color, alpha=alpha, linewidth=1.5,
+                    label=f'{scale} ({group})')
+
+    ax.set_title(f'{model_type}: Mean Norm (averaged across categories)\n'
+                 f'Solid=correct, Dashed=incorrect', fontsize=13)
+    ax.set_xlabel('Layer')
+    ax.set_ylabel('Mean L2 Norm')
+    ax.legend(fontsize=8, ncol=2, bbox_to_anchor=(1.02, 1), loc='upper left')
+    ax.grid(True, alpha=0.3)
+
+    plt.tight_layout()
+    plt.savefig(save_path, dpi=200, bbox_inches='tight')
+    plt.close()
+    logger.info(f"Saved: {save_path}")
+
+
+def plot_stat_test_heatmap(df_raw, model_type, out_dir):
+    """For each scale, run stat tests at representative layers and plot a summary heatmap."""
+    layer_cols = get_layer_columns(df_raw)
+    layers = [get_layer_index(c) for c in layer_cols]
+    n_layers = len(layers)
+    rep_indices = [0, n_layers // 4, n_layers // 2, 3 * n_layers // 4, n_layers - 1]
+    rep_layers = sorted(set(layers[i] for i in rep_indices))
+
+    available_scales = [s for s in SCALE_ORDER if s in df_raw['scale'].unique()]
+
+    for scale in available_scales:
+        all_test_rows = []
+        for layer in rep_layers:
+            col = f'norm_L{layer}'
+            if col not in df_raw.columns:
+                continue
+            test_df = stat_test_norms(df_raw, scale, col)
+            if test_df.empty:
+                continue
+            test_df['layer'] = layer
+            all_test_rows.append(test_df)
+
+        if not all_test_rows:
+            continue
+        test_results = pd.concat(all_test_rows, ignore_index=True)
+        test_results.to_csv(os.path.join(out_dir, f'stat_tests_{scale}.csv'), index=False)
+
+        # Heatmap of effect sizes
+        pivot = test_results.pivot_table(
+            index='category', columns='layer', values='effect_size_r',
+        )
+        pivot = pivot.reindex(index=CATEGORY_ORDER)
+
+        fig, ax = plt.subplots(figsize=(max(6, len(rep_layers) * 1.5), 5))
+        sns.heatmap(pivot, annot=True, fmt='.2f', center=0, cmap='RdBu_r',
+                    vmin=-1, vmax=1, ax=ax, linewidths=0.5)
+
+        # Mark significant cells
+        for i, cat in enumerate(pivot.index):
+            for j, layer in enumerate(pivot.columns):
+                row = test_results[(test_results['category'] == cat) & (test_results['layer'] == layer)]
+                if len(row) > 0 and row.iloc[0]['significant']:
+                    ax.text(j + 0.5, i + 0.85, '*', ha='center', va='center',
+                            fontsize=14, fontweight='bold', color='black')
+
+        ax.set_title(f'{model_type} — {scale}: Norm Effect Size (rank-biserial r)\n'
+                     f'Positive r = correct > incorrect. * = p<0.05', fontsize=11)
+        plt.tight_layout()
+        plt.savefig(os.path.join(out_dir, f'effect_size_heatmap_{scale}.png'),
+                    dpi=200, bbox_inches='tight')
+        plt.close()
+        logger.info(f"Saved effect size heatmap: {scale}")
+
+
+# ============================================================================
+# Summary
+# ============================================================================
+
+def generate_summary(norm_ratios, df_raw, model_type, out_dir):
+    """Generate a text summary of findings."""
+    layer_cols = get_layer_columns(df_raw)
+    layers = [get_layer_index(c) for c in layer_cols]
+    # Use last-quarter layer as representative
+    rep_layer = layers[3 * len(layers) // 4]
+
+    lines = [f"=== Norm Analysis Summary: {model_type} ===", ""]
+    lines.append("Hypothesis: Incorrect samples collapse to a neutral zone (lower norms)")
+    lines.append(f"Representative layer: L{rep_layer}")
+    lines.append("")
+
+    available_scales = [s for s in SCALE_ORDER if s in norm_ratios['scale'].unique()]
+    for scale in available_scales:
+        data = norm_ratios[(norm_ratios['scale'] == scale) & (norm_ratios['layer'] == rep_layer)]
+        if data.empty:
+            continue
+        lines.append(f"--- {scale} (L{rep_layer}) ---")
+        n_lower = 0
+        for _, row in data.iterrows():
+            direction = "LOWER" if row['ratio'] < 1.0 else "higher"
+            if row['ratio'] < 1.0:
+                n_lower += 1
+            lines.append(
+                f" {row['category']:>6s}: ratio={row['ratio']:.3f} "
+                f"(correct={row['correct_mean']:.1f}, incorrect={row['incorrect_mean']:.1f}) "
+                f"-> incorrect is {direction}"
+            )
+        lines.append(f" => {n_lower}/{len(data)} categories have lower incorrect norms")
+        lines.append("")
+
+    # Stat test at rep layer
+    col = f'norm_L{rep_layer}'
+    if col in df_raw.columns:
+        lines.append(f"--- Statistical Tests (Mann-Whitney U) at L{rep_layer} ---")
+        for scale in available_scales:
+            test_df = stat_test_norms(df_raw, scale, col)
+            if test_df.empty:
+                continue
+            n_sig = test_df['significant'].sum()
+            lines.append(f" {scale}: {n_sig}/{len(test_df)} categories significant (p<0.05)")
+            for _, row in test_df.iterrows():
+                sig = "*" if row['significant'] else " "
+                lines.append(
+                    f" {sig} {row['category']:>6s}: p={row['p_value']:.4f}, "
+                    f"r={row['effect_size_r']:+.3f} "
+                    f"(corr={row['correct_mean']:.1f}, incorr={row['incorrect_mean']:.1f})"
+                )
+        lines.append("")
+
+    summary_text = "\n".join(lines)
+    summary_path = os.path.join(out_dir, 'summary.txt')
+    with open(summary_path, 'w') as f:
+        f.write(summary_text)
+    logger.info(f"Saved summary: {summary_path}")
+    print(summary_text)
+
+
+# ============================================================================
+# Main
+# ============================================================================
+
+def main():
+    parser = argparse.ArgumentParser(description='Norm Analysis: Neutral Zone Collapse Hypothesis')
+    parser.add_argument('--model_type', type=str, required=True, choices=['molmo', 'nvila', 'qwen'])
+    parser.add_argument('--results_dir', type=str,
+                        default='/data/shared/Qwen/experiments/correct_filter/results')
+    args = parser.parse_args()
+
+    out_dir = os.path.join(args.results_dir, args.model_type, 'norm_analysis')
+    os.makedirs(out_dir, exist_ok=True)
+
+    # Load data
+    logger.info(f"Loading norms for {args.model_type}...")
+    df = load_norms(args.results_dir, args.model_type)
+    logger.info(f"Total samples: {len(df)}")
+    logger.info(f"Scales: {sorted(df['scale'].unique())}")
+    logger.info(f"Correct: {df['is_correct'].sum()}, Incorrect: {(~df['is_correct']).sum()}")
+
+    # Compute stats
+    logger.info("\nComputing norm statistics...")
+    norm_stats = compute_norm_stats(df)
+    norm_stats.to_csv(os.path.join(out_dir, 'norm_stats.csv'), index=False)
+
+    norm_ratios = compute_norm_ratios(norm_stats)
+    norm_ratios.to_csv(os.path.join(out_dir, 'norm_ratios.csv'), index=False)
+
+    # Per-scale plots
+    available_scales = [s for s in SCALE_ORDER if s in df['scale'].unique()]
+    for scale in available_scales:
+        plot_norm_trajectory(norm_stats, scale, args.model_type,
+                             os.path.join(out_dir, f'norm_trajectory_{scale}.png'))
+        plot_norm_ratio_trajectory(norm_ratios, scale, args.model_type,
+                                   os.path.join(out_dir, f'norm_ratio_{scale}.png'))
+
+    # Cross-scale plots
+    plot_cross_scale_norm_ratio(norm_ratios, args.model_type,
+                                os.path.join(out_dir, 'cross_scale_norm_ratio.png'))
+    plot_overall_norm_comparison(norm_stats, args.model_type,
+                                 os.path.join(out_dir, 'overall_norm_comparison.png'))
+
+    # Statistical tests + effect size heatmaps
+    plot_stat_test_heatmap(df, args.model_type, out_dir)
+
+    # Summary
+    generate_summary(norm_ratios, df, args.model_type, out_dir)
+
+    logger.info(f"\n=== Norm Analysis Complete ===\nResults in: {out_dir}")
+
+
+if __name__ == '__main__':
+    main()
correct_filter/run_molmo.sh ADDED
@@ -0,0 +1,59 @@
+#!/bin/bash
+set -e
+
+SCRIPT="/data/shared/Qwen/experiments/correct_filter/correct_filter_analysis.py"
+PYTHON="conda run --no-capture-output -n molmo python"
+MODEL="molmo"
+LOG_DIR="/data/shared/Qwen/experiments/correct_filter/logs/${MODEL}"
+mkdir -p "$LOG_DIR"
+
+# GPU plan: all 6 scripts run simultaneously
+# Molmo(25GB) shares GPU 0-4 with NVILA(8GB) = ~33GB each
+SCALES=("vanilla" "80k" "400k" "800k" "2m")
+GPUS=(0 1 2 3 4)
+
+echo "========================================="
+echo " Molmo Correct Filter: Launching ${#SCALES[@]} scales in parallel"
+echo "========================================="
+
+PIDS=()
+for i in "${!SCALES[@]}"; do
+    scale="${SCALES[$i]}"
+    gpu="${GPUS[$i]}"
+    log="${LOG_DIR}/${scale}.log"
+
+    echo "[GPU $gpu] $scale -> $log"
+    CUDA_VISIBLE_DEVICES=$gpu $PYTHON $SCRIPT \
+        --model_type $MODEL \
+        --scales $scale \
+        --device cuda \
+        --no-auto-roborefer \
+        > "$log" 2>&1 &
+    PIDS+=($!)
+done
+
+echo ""
+echo "Waiting for all ${#PIDS[@]} processes..."
+FAILED=0
+for i in "${!PIDS[@]}"; do
+    pid="${PIDS[$i]}"
+    scale="${SCALES[$i]}"
+    if wait $pid; then
+        echo "[DONE] $scale (PID $pid) - SUCCESS"
+    else
+        echo "[FAIL] $scale (PID $pid) - EXIT CODE $?"
+        FAILED=$((FAILED + 1))
+    fi
+done
+
+if [ $FAILED -gt 0 ]; then
+    echo "WARNING: $FAILED scale(s) failed. Check logs in $LOG_DIR"
+fi
+
+echo "========================================="
+echo " Molmo Correct Filter: Running merge"
+echo "========================================="
+$PYTHON $SCRIPT --model_type $MODEL --merge --scales vanilla 80k 400k 800k 2m \
+    2>&1 | tee "${LOG_DIR}/merge.log"
+
+echo "ALL DONE: $MODEL"
correct_filter/run_nvila.sh ADDED
@@ -0,0 +1,70 @@
+#!/bin/bash
+set -e
+
+SCRIPT="/data/shared/Qwen/experiments/correct_filter/correct_filter_analysis.py"
+PYTHON="conda run --no-capture-output -n vila python"
+MODEL="nvila"
+RESULTS_BASE="/data/shared/Qwen/experiments/correct_filter/results"
+LOG_DIR="/data/shared/Qwen/experiments/correct_filter/logs/${MODEL}"
+mkdir -p "$LOG_DIR"
+
+# GPU plan: NVILA(8GB) shares GPU 0-4 with Molmo(25GB), GPU 5 with Qwen vanilla(10GB)
+SCALES=("vanilla" "80k" "400k" "800k" "2m" "roborefer")
+GPUS=(0 1 2 3 4 5)
+
+echo "========================================="
+echo " NVILA Correct Filter: Launching ${#SCALES[@]} scales in parallel"
+echo "========================================="
+
+PIDS=()
+for i in "${!SCALES[@]}"; do
+    scale="${SCALES[$i]}"
+    gpu="${GPUS[$i]}"
+    log="${LOG_DIR}/${scale}.log"
+
+    echo "[GPU $gpu] $scale -> $log"
+    CUDA_VISIBLE_DEVICES=$gpu $PYTHON $SCRIPT \
+        --model_type $MODEL \
+        --scales $scale \
+        --device cuda \
+        --no-auto-roborefer \
+        > "$log" 2>&1 &
+    PIDS+=($!)
+done
+
+echo ""
+echo "Waiting for all ${#PIDS[@]} processes..."
+FAILED=0
+for i in "${!PIDS[@]}"; do
+    pid="${PIDS[$i]}"
+    scale="${SCALES[$i]}"
+    if wait $pid; then
+        echo "[DONE] $scale (PID $pid) - SUCCESS"
+    else
+        echo "[FAIL] $scale (PID $pid) - EXIT CODE $?"
+        FAILED=$((FAILED + 1))
+    fi
+done
+
+if [ $FAILED -gt 0 ]; then
+    echo "WARNING: $FAILED scale(s) failed. Check logs in $LOG_DIR"
+fi
+
+echo "========================================="
+echo " NVILA Correct Filter: Merge 1/2 (without roborefer)"
+echo "========================================="
+$PYTHON $SCRIPT --model_type $MODEL --merge \
+    --scales vanilla 80k 400k 800k 2m \
+    2>&1 | tee "${LOG_DIR}/merge.log"
+
+echo "========================================="
+echo " NVILA Correct Filter: Merge 2/2 (with roborefer)"
+echo "========================================="
+$PYTHON $SCRIPT --model_type $MODEL --merge \
+    --scales vanilla 80k 400k 800k 2m roborefer \
+    --merge-output-dir "${RESULTS_BASE}/nvila_with_roborefer" \
+    2>&1 | tee "${LOG_DIR}/merge_with_roborefer.log"
+
+echo "ALL DONE: $MODEL"
+echo "Results (no roborefer): ${RESULTS_BASE}/nvila/"
+echo "Results (with roborefer): ${RESULTS_BASE}/nvila_with_roborefer/"
correct_filter/run_qwen.sh ADDED
@@ -0,0 +1,59 @@
+ #!/bin/bash
+ set -e
+
+ SCRIPT="/data/shared/Qwen/experiments/correct_filter/correct_filter_analysis.py"
+ PYTHON="/usr/bin/python3"
+ MODEL="qwen"
+ LOG_DIR="/data/shared/Qwen/experiments/correct_filter/logs/${MODEL}"
+ mkdir -p "$LOG_DIR"
+
+ # GPU plan: Qwen(10GB) on GPU 5-7, sharing with NVILA roborefer on GPU 5
+ # GPU 6,7 each host 2 Qwen scales (20GB each, well within 80GB)
+ SCALES=("vanilla" "80k" "400k" "800k" "2m")
+ GPUS=(5 6 6 7 7)
+
+ echo "========================================="
+ echo " Qwen Correct Filter: Launching ${#SCALES[@]} scales in parallel"
+ echo "========================================="
+
+ PIDS=()
+ for i in "${!SCALES[@]}"; do
+     scale="${SCALES[$i]}"
+     gpu="${GPUS[$i]}"
+     log="${LOG_DIR}/${scale}.log"
+
+     echo "[GPU $gpu] $scale -> $log"
+     CUDA_VISIBLE_DEVICES=$gpu $PYTHON $SCRIPT \
+         --model_type $MODEL \
+         --scales $scale \
+         --device cuda \
+         --no-auto-roborefer \
+         > "$log" 2>&1 &
+     PIDS+=($!)
+ done
+
+ echo ""
+ echo "Waiting for all ${#PIDS[@]} processes..."
+ FAILED=0
+ for i in "${!PIDS[@]}"; do
+     pid="${PIDS[$i]}"
+     scale="${SCALES[$i]}"
+     if wait $pid; then
+         echo "[DONE] $scale (PID $pid) - SUCCESS"
+     else
+         echo "[FAIL] $scale (PID $pid) - EXIT CODE $?"
+         FAILED=$((FAILED + 1))
+     fi
+ done
+
+ if [ $FAILED -gt 0 ]; then
+     echo "WARNING: $FAILED scale(s) failed. Check logs in $LOG_DIR"
+ fi
+
+ echo "========================================="
+ echo " Qwen Correct Filter: Running merge"
+ echo "========================================="
+ $PYTHON $SCRIPT --model_type $MODEL --merge --scales vanilla 80k 400k 800k 2m \
+     2>&1 | tee "${LOG_DIR}/merge.log"
+
+ echo "ALL DONE: $MODEL"
exp2a_correct_filter/exp2a_correct_filter_analysis.py ADDED
@@ -0,0 +1,1825 @@
1
+ """
2
+ Experiment 2-A (Correct Filter): Correctness-Filtered Representation Analysis
3
+
4
+ Extends exp2a_modified by:
5
+ - Generating model predictions to determine correctness
6
+ - Filtering samples into correct/incorrect groups with balanced sampling
7
+ - Running similarity analysis on each group separately
8
+ - Recording per-scale, per-category accuracy
9
+ - Comparing correct-only vs incorrect-only vs all to check whether
10
+ scaling effects on similarity are genuine or just accuracy-driven
11
+
12
+ Balanced sampling: within each group (correct/incorrect), all 6 categories
13
+ have the same number of samples, rounded down to the nearest multiple of 50.
14
+ """
15
+
16
+ import os
17
+ import sys
18
+ import json
19
+ import argparse
20
+ import base64
21
+ import logging
22
+ import random
23
+ import re
24
+ from io import BytesIO
25
+ from collections import defaultdict
26
+ from typing import Dict, List, Tuple, Optional, Any
27
+ from abc import ABC, abstractmethod
28
+
29
+ import torch
30
+ import numpy as np
31
+ import pandas as pd
32
+ from PIL import Image
33
+ from tqdm import tqdm
34
+ import matplotlib.pyplot as plt
35
+ import seaborn as sns
36
+ from sklearn.metrics.pairwise import cosine_similarity
37
+
38
+ # Setup logging
39
+ logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
40
+ logger = logging.getLogger(__name__)
41
+
42
+ # Category order for output
43
+ CATEGORY_ORDER = ['left', 'right', 'above', 'under', 'far', 'close']
44
+
45
+ # Opposite map for answer matching
46
+ OPPOSITE_MAP = {
47
+ 'left': 'right', 'right': 'left',
48
+ 'above': 'under', 'under': 'above',
49
+ 'far': 'close', 'close': 'far',
50
+ }
51
+
52
+ # Pair definitions for trajectory analysis
53
+ TRAJECTORY_PAIRS = {
54
+ 'hypothesis': [
55
+ ('above', 'far', 'above-far', '#d62728'), # red
56
+ ('under', 'close', 'under-close', '#1f77b4'), # blue
57
+ ],
58
+ 'within_axis': [
59
+ ('left', 'right', 'left-right', '#2ca02c'), # green
60
+ ('above', 'under', 'above-under', '#ff7f0e'), # orange
61
+ ('far', 'close', 'far-close', '#9467bd'), # purple
62
+ ],
63
+ 'counter_hypothesis': [
64
+ ('above', 'close', 'above-close', '#e377c2'), # pink
65
+ ('under', 'far', 'under-far', '#17becf'), # cyan
66
+ ],
67
+ }
68
+
69
+ # Scale colors for cross-scale plots
70
+ SCALE_COLORS = {
71
+ 'vanilla': '#1f77b4',
72
+ '80k': '#ff7f0e',
73
+ '400k': '#2ca02c',
74
+ '800k': '#d62728',
75
+ '2m': '#9467bd',
76
+ 'roborefer': '#8c564b',
77
+ }
78
+
79
+
80
+ # ============================================================================
81
+ # Data Loading & Modification (same as exp2a_modified)
82
+ # ============================================================================
83
+
84
+ OBJECT_PATTERNS = [
85
+ re.compile(r'between\s+(.+?)\s+and\s+(.+?)\s+in', re.IGNORECASE),
86
+ re.compile(r'of\s+(.+?)\s+and\s+(.+?)\s+in', re.IGNORECASE),
87
+ re.compile(r'positions\s+of\s+(.+?)\s+and\s+(.+?)\s+interact', re.IGNORECASE),
88
+ re.compile(r'How\s+are\s+(.+?)\s+and\s+(.+?)\s+positioned', re.IGNORECASE),
89
+ re.compile(r'arrangement\s+of\s+(.+?)\s+and\s+(.+?)\s+in', re.IGNORECASE),
90
+ ]
91
+
92
+
93
+ def extract_objects(question: str) -> Tuple[str, str]:
94
+ for pattern in OBJECT_PATTERNS:
95
+ m = pattern.search(question)
96
+ if m:
97
+ return m.group(1).strip(), m.group(2).strip()
98
+ raise ValueError(f"Could not extract objects from: {question}")
99
+
100
+
101
+ def modify_pairwise_sample(sample: dict) -> dict:
102
+ obj1, obj2 = extract_objects(sample['question'])
103
+ category = sample['category']
104
+
105
+ if category in ['left', 'right']:
106
+ new_question = f"Is the {obj1} to the left or right of the {obj2}?"
107
+ else: # above, under
108
+ new_question = f"Is the {obj1} above or under the {obj2}?"
109
+
110
+ return {
111
+ 'index': sample['index'],
112
+ 'image_base64': sample['image_base64'],
113
+ 'question': new_question,
114
+ 'answer': category,
115
+ 'category': category,
116
+ }
117
+
118
+
119
+ def modify_distance_sample(sample: dict, rng: random.Random) -> dict:
120
+ category = sample['category']
121
+ answer_key = sample['answer']
122
+ options = sample['options']
123
+
124
+ target_object = options[answer_key]
125
+ candidates = [v for k, v in options.items() if k != answer_key]
126
+ reference_object = rng.choice(candidates)
127
+
128
+ new_question = f"Compared to {reference_object}, is {target_object} far or close from you?"
129
+
130
+ return {
131
+ 'index': sample['index'],
132
+ 'image_base64': sample['image_base64'],
133
+ 'question': new_question,
134
+ 'answer': category,
135
+ 'category': category,
136
+ }
137
+
138
+
139
+ def load_and_modify_data(
140
+ tsv_path: str,
141
+ seed: int = 42
142
+ ) -> Dict[str, List[dict]]:
143
+ """Load ALL samples (no per-category limit) to maximize data for correct/incorrect filtering."""
144
+ rng = random.Random(seed)
145
+ np.random.seed(seed)
146
+
147
+ df = pd.read_csv(tsv_path, sep='\t')
148
+
149
+ raw_grouped = defaultdict(list)
150
+ for _, row in df.iterrows():
151
+ category = row['category']
152
+ sample = {
153
+ 'index': row['index'],
154
+ 'image_base64': row['image'],
155
+ 'question': row['question'],
156
+ 'answer': row['answer'],
157
+ 'category': category,
158
+ 'options': {
159
+ 'A': row['A'],
160
+ 'B': row['B'],
161
+ 'C': row['C'],
162
+ 'D': row['D']
163
+ }
164
+ }
165
+ raw_grouped[category].append(sample)
166
+
167
+ modified_data = defaultdict(list)
168
+ stats = {'total': 0, 'success': 0, 'failed': 0}
169
+
170
+ for category in CATEGORY_ORDER:
171
+ samples = raw_grouped[category]
172
+
173
+ for sample in samples:
174
+ stats['total'] += 1
175
+ try:
176
+ if category in ['left', 'right', 'above', 'under']:
177
+ modified = modify_pairwise_sample(sample)
178
+ else:
179
+ modified = modify_distance_sample(sample, rng)
180
+
181
+ assert modified['answer'] == modified['category']
182
+ modified_data[category].append(modified)
183
+ stats['success'] += 1
184
+ except Exception as e:
185
+ stats['failed'] += 1
186
+ logger.warning(f" Failed to modify sample {sample['index']}: {e}")
187
+
188
+ logger.info(f"Data modification: {stats['success']}/{stats['total']} success, {stats['failed']} failed")
189
+ for cat in CATEGORY_ORDER:
190
+ if cat in modified_data:
191
+ logger.info(f" {cat}: {len(modified_data[cat])} samples")
192
+ ex = modified_data[cat][0]
193
+ logger.info(f" Example Q: {ex['question']}")
194
+ logger.info(f" Example A: {ex['answer']}")
195
+
196
+ return dict(modified_data)
197
+
198
+
199
+ def decode_base64_image(base64_str: str) -> Image.Image:
200
+ image_data = base64.b64decode(base64_str)
201
+ return Image.open(BytesIO(image_data)).convert('RGB')
202
+
203
+
204
+ # ============================================================================
205
+ # Answer Matching
206
+ # ============================================================================
207
+
208
+ def check_answer(generated_text: str, expected_category: str) -> bool:
209
+ """Check if model's generated text matches the expected category.
210
+
211
+ Finds which of the two options (expected vs opposite) appears first.
212
+ """
213
+ if not generated_text or not generated_text.strip():
214
+ return False
215
+
216
+ text = generated_text.strip().lower()
217
+ expected = expected_category.lower()
218
+ opposite = OPPOSITE_MAP[expected]
219
+
220
+ pos_exp = text.find(expected)
221
+ pos_opp = text.find(opposite)
222
+
223
+ if pos_exp == -1:
224
+ return False
225
+ if pos_opp == -1:
226
+ return True
227
+ return pos_exp < pos_opp
228
+
229
+
230
+ # ============================================================================
231
+ # Base Extractor (modified: prefill-only hooks + extract_and_predict)
232
+ # ============================================================================
233
+
234
+ class BaseHiddenStateExtractor(ABC):
235
+ """Base class for extracting hidden states from VLMs."""
236
+
237
+ def __init__(self, model_path: str, device: str = 'cuda', target_layers: List[int] = None):
238
+ self.model_path = model_path
239
+ self.device = device
240
+ self.hidden_states = {}
241
+ self.hooks = []
242
+
243
+ self._load_model()
244
+
245
+ num_layers = self._get_num_layers()
246
+ if target_layers is None:
247
+ self.target_layers = list(range(num_layers))
248
+ logger.info(f"Model has {num_layers} layers. Extracting ALL layers (0..{num_layers-1})")
249
+ else:
250
+ self.target_layers = target_layers
251
+ logger.info(f"Model has {num_layers} layers. Target layers: {self.target_layers}")
252
+
253
+ self._register_hooks()
254
+
255
+ def _register_hooks(self):
256
+ for layer_idx in self.target_layers:
257
+ module = self._get_layer_module(layer_idx)
258
+ if module is not None:
259
+ hook = module.register_forward_hook(self._make_hook(layer_idx))
260
+ self.hooks.append(hook)
261
+ logger.info(f" Registered hook on layer {layer_idx}")
262
+
263
+ def _make_hook(self, layer_idx: int):
264
+ """Create a hook that only captures during prefill (seq_len > 1)."""
265
+ def hook_fn(module, input, output):
266
+ if isinstance(output, tuple):
267
+ hidden = output[0]
268
+ else:
269
+ hidden = output
270
+
271
+ # Only capture during prefill pass (seq_len > 1).
272
+ # During autoregressive generation, each step has seq_len = 1.
273
+ if hidden.shape[1] > 1:
274
+ last_token = hidden[:, -1, :].detach().cpu().float()
275
+ self.hidden_states[layer_idx] = last_token.squeeze(0)
276
+
277
+ return hook_fn
278
+
279
+ @abstractmethod
280
+ def _load_model(self):
281
+ pass
282
+
283
+ @abstractmethod
284
+ def _get_num_layers(self) -> int:
285
+ pass
286
+
287
+ @abstractmethod
288
+ def _get_layer_module(self, layer_idx: int):
289
+ pass
290
+
291
+ @abstractmethod
292
+ def extract_and_predict(self, image: Image.Image, question: str) -> Tuple[Dict[int, torch.Tensor], str]:
293
+ """Extract hidden states AND generate predicted answer in one pass.
294
+
295
+ Returns:
296
+ (hidden_states, predicted_answer_text)
297
+ """
298
+ pass
299
+
300
+ def cleanup(self):
301
+ for hook in self.hooks:
302
+ hook.remove()
303
+ self.hooks = []
304
+ if hasattr(self, 'model'):
305
+ del self.model
306
+ if hasattr(self, 'processor'):
307
+ del self.processor
308
+ torch.cuda.empty_cache()
309
+
310
+
311
+ # ============================================================================
312
+ # Molmo Extractor
313
+ # ============================================================================
314
+
315
+ class MolmoExtractor(BaseHiddenStateExtractor):
316
+
317
+ def _load_model(self):
318
+ config_path = os.path.join(self.model_path, "config.yaml")
319
+ checkpoint_path = os.path.join(self.model_path, "model.pt")
320
+
321
+ if os.path.exists(config_path) and os.path.exists(checkpoint_path):
322
+ self._load_native_model()
323
+ self.is_native = True
324
+ else:
325
+ self._load_hf_model()
326
+ self.is_native = False
327
+
328
+ def _load_native_model(self):
329
+ from olmo.config import ModelConfig
330
+ from olmo.model import Molmo as NativeMolmoModel
331
+ from olmo.data.model_preprocessor import MultiModalPreprocessor
332
+ from olmo.data.data_formatter import DataFormatter
333
+
334
+ _original_load = torch.load
335
+ def _unsafe_load_wrapper(*args, **kwargs):
336
+ if 'weights_only' not in kwargs:
337
+ kwargs['weights_only'] = False
338
+ return _original_load(*args, **kwargs)
339
+ torch.load = _unsafe_load_wrapper
340
+
341
+ config_path = os.path.join(self.model_path, "config.yaml")
342
+ checkpoint_path = os.path.join(self.model_path, "model.pt")
343
+
344
+ cfg = ModelConfig.load(config_path, key="model", validate_paths=False)
345
+ cfg.init_device = "cpu"
346
+
347
+ self.model = NativeMolmoModel(cfg)
348
+ state_dict = torch.load(checkpoint_path, map_location="cpu")
349
+ self.model.load_state_dict(state_dict)
350
+ self.model = self.model.to(self.device, dtype=torch.bfloat16).eval()
351
+
352
+ self.tokenizer = cfg.get_tokenizer()
353
+ v_cfg = cfg.vision_backbone
354
+ h, w = cfg.llm_patches_per_crop()
355
+ image_padding_mask = 2 if cfg.fix_image_padding else (1 if cfg.image_padding_embed else None)
356
+
357
+ class SafeDataFormatter(DataFormatter):
358
+ def get_system_prompt(self, style, for_inference, messages, rng=None):
359
+ if style is None:
360
+ style = "User"
361
+ return super().get_system_prompt(style, for_inference, messages, rng)
362
+
363
+ self.formatter = SafeDataFormatter(
364
+ prompt_templates=cfg.prompt_type,
365
+ message_format=cfg.message_formatting,
366
+ system_prompt=cfg.system_prompt_kind,
367
+ always_start_with_space=cfg.always_start_with_space,
368
+ default_inference_len=cfg.default_inference_len
369
+ )
370
+
371
+ self.preprocessor = MultiModalPreprocessor(
372
+ tokenizer=self.tokenizer,
373
+ normalize=str(v_cfg.image_model_type),
374
+ crop_mode=cfg.crop_mode,
375
+ max_crops=cfg.max_crops,
376
+ overlap_margins=cfg.overlap_margins,
377
+ resize=v_cfg.resize_mode,
378
+ use_col_tokens=cfg.use_col_tokens,
379
+ base_image_input_size=v_cfg.image_default_input_size,
380
+ image_pooling_w=cfg.image_pooling_w,
381
+ image_pooling_h=cfg.image_pooling_h,
382
+ image_token_length_w=w,
383
+ image_token_length_h=h,
384
+ image_patch_size=v_cfg.image_patch_size,
385
+ image_padding_mask=image_padding_mask,
386
+ pad_value=cfg.pad_value,
387
+ loss_token_weighting=cfg.multi_annotation_weighting,
388
+ )
389
+
390
+ logger.info(f"Loaded native Molmo model from {self.model_path}")
391
+
392
+ def _load_hf_model(self):
393
+ from transformers import AutoModelForCausalLM, AutoProcessor
394
+
395
+ self.model = AutoModelForCausalLM.from_pretrained(
396
+ self.model_path,
397
+ torch_dtype=torch.bfloat16,
398
+ trust_remote_code=True,
399
+ device_map=self.device
400
+ )
401
+ self.model.eval()
402
+
403
+ self.processor = AutoProcessor.from_pretrained(
404
+ self.model_path,
405
+ trust_remote_code=True
406
+ )
407
+ logger.info(f"Loaded HuggingFace Molmo model from {self.model_path}")
408
+
409
+ def _get_num_layers(self) -> int:
410
+ if self.is_native:
411
+ return len(self.model.transformer.blocks)
412
+ else:
413
+ if hasattr(self.model, 'model') and hasattr(self.model.model, 'transformer'):
414
+ return len(self.model.model.transformer.blocks)
415
+ return 32
416
+
417
+ def _get_layer_module(self, layer_idx: int):
418
+ if self.is_native:
419
+ return self.model.transformer.blocks[layer_idx]
420
+ else:
421
+ return self.model.model.transformer.blocks[layer_idx]
422
+
423
+ def extract_and_predict(self, image: Image.Image, question: str) -> Tuple[Dict[int, torch.Tensor], str]:
424
+ self.hidden_states = {}
425
+
426
+ if self.is_native:
427
+ example = {"messages": [question], "image": image}
428
+ messages, _ = self.formatter(example, is_training=False, for_inference=True, rng=np.random)
429
+ image_np = np.array(image)
430
+ batch = self.preprocessor(image_np, messages, is_training=False, require_image_features=True)
431
+
432
+ if 'input_ids' not in batch and 'input_tokens' in batch:
433
+ batch['input_ids'] = batch['input_tokens']
434
+
435
+ def to_tensor(x):
436
+ if isinstance(x, np.ndarray):
437
+ return torch.from_numpy(x)
438
+ return x
439
+
440
+ input_ids = to_tensor(batch['input_ids']).unsqueeze(0).to(self.device)
441
+ if input_ids.dtype not in [torch.long, torch.int64]:
442
+ input_ids = input_ids.long()
443
+
444
+ images_tensor = to_tensor(batch['images']).unsqueeze(0).to(self.device).to(dtype=torch.bfloat16)
445
+ image_masks = to_tensor(batch['image_masks']).unsqueeze(0).to(self.device).to(dtype=torch.bfloat16)
446
+ image_input_idx = to_tensor(batch['image_input_idx']).unsqueeze(0).to(self.device)
447
+
448
+ with torch.inference_mode():
449
+ with torch.autocast(device_type="cuda", enabled=True, dtype=torch.bfloat16):
450
+ gen_output = self.model.generate(
451
+ input_ids=input_ids,
452
+ images=images_tensor,
453
+ image_masks=image_masks,
454
+ image_input_idx=image_input_idx,
455
+ max_steps=20,
456
+ beam_size=1,
457
+ )
458
+
459
+ # gen_output.token_ids shape: (batch, beam, max_steps)
460
+ generated_ids = gen_output.token_ids[0, 0] # first batch, first beam
461
+ answer = self.tokenizer.decode(generated_ids.tolist()).strip()
462
+ # Remove EOS tokens
463
+ for eos in ['<|endoftext|>', '</s>', '<|end|>']:
464
+ answer = answer.replace(eos, '').strip()
465
+
466
+ else:
467
+ from transformers import GenerationConfig
468
+
469
+ inputs = self.processor.process(images=[image], text=question)
470
+ processed_inputs = {}
471
+ for k, v in inputs.items():
472
+ v = v.to(self.device).unsqueeze(0)
473
+ if v.dtype == torch.float32:
474
+ v = v.to(dtype=torch.bfloat16)
475
+ processed_inputs[k] = v
476
+
477
+ with torch.no_grad():
478
+ with torch.autocast(device_type="cuda", enabled=True, dtype=torch.bfloat16):
479
+ output = self.model.generate_from_batch(
480
+ processed_inputs,
481
+ GenerationConfig(max_new_tokens=20, stop_strings="<|endoftext|>"),
482
+ tokenizer=self.processor.tokenizer,
483
+ )
484
+
485
+ input_len = processed_inputs['input_ids'].shape[1]
486
+ generated_tokens = output[0, input_len:]
487
+ answer = self.processor.tokenizer.decode(generated_tokens, skip_special_tokens=True).strip()
488
+
489
+ return self.hidden_states.copy(), answer
490
+
491
+
492
+ # ============================================================================
493
+ # NVILA Extractor
494
+ # ============================================================================
495
+
496
+ class NVILAExtractor(BaseHiddenStateExtractor):
497
+
498
+ def _load_model(self):
499
+ original_sys_path = sys.path.copy()
500
+ sys.path = [p for p in sys.path if 'RoboRefer' not in p]
501
+
502
+ modules_to_remove = [key for key in list(sys.modules.keys()) if 'llava' in key.lower()]
503
+ removed_modules = {}
504
+ for mod in modules_to_remove:
505
+ removed_modules[mod] = sys.modules.pop(mod)
506
+
507
+ try:
508
+ import llava
509
+ from llava.media import Image as LLaVAImage
510
+ from llava import conversation as clib
511
+ except Exception as err:
512
+ sys.path = original_sys_path
513
+ for mod, module in removed_modules.items():
514
+ sys.modules[mod] = module
515
+ raise RuntimeError(f"Failed to import llava: {err}")
516
+
517
+ sys.path = original_sys_path
518
+
519
+ self.LLaVAImage = LLaVAImage
520
+ self.clib = clib
521
+
522
+ self.model = llava.load(self.model_path, model_base=None)
523
+
524
+ self._find_llm_backbone()
525
+
526
+ logger.info(f"Loaded NVILA model from {self.model_path}")
527
+
528
+ def _find_llm_backbone(self):
529
+ candidates = []
530
+
531
+ if hasattr(self.model, 'llm'):
532
+ if hasattr(self.model.llm, 'model') and hasattr(self.model.llm.model, 'layers'):
533
+ candidates.append(('model.llm.model.layers', self.model.llm.model.layers))
534
+ if hasattr(self.model.llm, 'layers'):
535
+ candidates.append(('model.llm.layers', self.model.llm.layers))
536
+
537
+ if hasattr(self.model, 'model'):
538
+ if hasattr(self.model.model, 'model') and hasattr(self.model.model.model, 'layers'):
539
+ candidates.append(('model.model.model.layers', self.model.model.model.layers))
540
+ if hasattr(self.model.model, 'layers'):
541
+ candidates.append(('model.model.layers', self.model.model.layers))
542
+
543
+ for name, module in self.model.named_modules():
544
+ if name.endswith('.layers') and hasattr(module, '__len__') and len(module) > 0:
545
+ candidates.append((name, module))
546
+
547
+ if candidates:
548
+ path, layers = candidates[0]
549
+ logger.info(f"Found LLM layers at: {path} (num_layers={len(layers)})")
550
+ self.llm_backbone = layers
551
+ self.layers_path = path
552
+ else:
553
+ logger.error("Could not find transformer layers in model!")
554
+ for name, _ in list(self.model.named_modules())[:20]:
555
+ logger.info(f" {name}")
556
+ raise ValueError("Could not locate transformer layers in NVILA model")
557
+
558
+ def _get_num_layers(self) -> int:
559
+ if hasattr(self, 'llm_backbone') and hasattr(self.llm_backbone, '__len__'):
560
+ return len(self.llm_backbone)
561
+ return 24
562
+
563
+ def _get_layer_module(self, layer_idx: int):
564
+ if hasattr(self, 'llm_backbone') and hasattr(self.llm_backbone, '__getitem__'):
565
+ module = self.llm_backbone[layer_idx]
566
+ logger.info(f" Accessing layer {layer_idx}: {type(module).__name__}")
567
+ return module
568
+ logger.error(f"Cannot access layer {layer_idx} - llm_backbone not properly initialized")
569
+ return None
570
+
571
+ def extract_and_predict(self, image: Image.Image, question: str) -> Tuple[Dict[int, torch.Tensor], str]:
572
+ self.hidden_states = {}
573
+
574
+ import tempfile
575
+ with tempfile.NamedTemporaryFile(suffix='.png', delete=False) as f:
576
+ temp_path = f.name
577
+ image.save(temp_path)
578
+
579
+ try:
580
+ prompt = [self.LLaVAImage(temp_path), question]
581
+
582
+ from transformers import GenerationConfig
583
+ gen_config = GenerationConfig(max_new_tokens=20, do_sample=False)
584
+ response = self.model.generate_content(prompt, generation_config=gen_config)
585
+ finally:
586
+ os.unlink(temp_path)
587
+
588
+ if isinstance(response, list):
589
+ response = response[0]
590
+ answer = str(response).strip()
591
+
592
+ return self.hidden_states.copy(), answer
593
+
594
+
595
+ # ============================================================================
596
+ # RoboRefer Extractor (NVILA-based)
597
+ # ============================================================================
598
+
599
+ class RoboReferExtractor(NVILAExtractor):
600
+
601
+ ROBOREFER_PATH = '/data/shared/Qwen/RoboRefer'
602
+
603
+ def _load_model(self):
604
+ original_sys_path = sys.path.copy()
605
+
606
+ if self.ROBOREFER_PATH not in sys.path:
607
+ sys.path.insert(0, self.ROBOREFER_PATH)
608
+
609
+ modules_to_remove = [key for key in list(sys.modules.keys()) if 'llava' in key.lower()]
610
+ removed_modules = {}
611
+ for mod in modules_to_remove:
612
+ removed_modules[mod] = sys.modules.pop(mod)
613
+
614
+ try:
615
+ import llava
616
+ from llava.media import Image as LLaVAImage
617
+ from llava import conversation as clib
618
+ except Exception as err:
619
+ sys.path = original_sys_path
620
+ for mod, module in removed_modules.items():
621
+ sys.modules[mod] = module
622
+ raise RuntimeError(f"Failed to import RoboRefer llava: {err}")
623
+
624
+ sys.path = original_sys_path
625
+
626
+ self.LLaVAImage = LLaVAImage
627
+ self.clib = clib
628
+
629
        self.model = llava.load(self.model_path, model_base=None)

        self._find_llm_backbone()

        logger.info(f"Loaded RoboRefer model from {self.model_path}")


# ============================================================================
# Qwen2.5-VL Extractor
# ============================================================================

class Qwen25VLExtractor(BaseHiddenStateExtractor):

    BASE_MODEL = "Qwen/Qwen2.5-VL-3B-Instruct"

    def _load_model(self):
        from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor

        try:
            self.model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
                self.model_path,
                torch_dtype=torch.bfloat16,
                device_map=self.device
            )
        except ImportError:
            logger.info("accelerate not available, loading model without device_map...")
            self.model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
                self.model_path,
                torch_dtype=torch.bfloat16,
            )
            self.model = self.model.to(self.device)

        self.model.eval()

        if self.model_path.startswith('/'):
            logger.info(f"Fine-tuned model detected, loading processor from base model: {self.BASE_MODEL}")
            self.processor = AutoProcessor.from_pretrained(self.BASE_MODEL)
        else:
            self.processor = AutoProcessor.from_pretrained(self.model_path)
        logger.info(f"Loaded Qwen2.5-VL model from {self.model_path}")

    def _get_num_layers(self) -> int:
        return len(self.model.model.layers)

    def _get_layer_module(self, layer_idx: int):
        return self.model.model.layers[layer_idx]

    def extract_and_predict(self, image: Image.Image, question: str) -> Tuple[Dict[int, torch.Tensor], str]:
        self.hidden_states = {}

        messages = [
            {
                "role": "user",
                "content": [
                    {"type": "image", "image": image},
                    {"type": "text", "text": question}
                ]
            }
        ]

        text = self.processor.apply_chat_template(
            messages, tokenize=False, add_generation_prompt=True
        )

        from qwen_vl_utils import process_vision_info
        image_inputs, video_inputs = process_vision_info(messages)

        inputs = self.processor(
            text=[text],
            images=image_inputs,
            videos=video_inputs,
            padding=True,
            return_tensors="pt"
        )
        inputs = inputs.to(self.device)

        with torch.no_grad():
            output_ids = self.model.generate(
                **inputs,
                max_new_tokens=20,
                do_sample=False,
            )

        input_len = inputs['input_ids'].shape[1]
        generated_ids = output_ids[0, input_len:]
        answer = self.processor.tokenizer.decode(generated_ids, skip_special_tokens=True).strip()

        return self.hidden_states.copy(), answer


# ============================================================================
# Factory Function
# ============================================================================

def get_extractor(model_type: str, model_path: str, scale: str = None, **kwargs) -> BaseHiddenStateExtractor:
    if model_type == 'nvila' and scale == 'roborefer':
        return RoboReferExtractor(model_path, **kwargs)

    extractors = {
        'molmo': MolmoExtractor,
        'nvila': NVILAExtractor,
        'qwen': Qwen25VLExtractor,
    }
    if model_type not in extractors:
        raise ValueError(f"Unknown model type: {model_type}. Available: {list(extractors.keys())}")
    return extractors[model_type](model_path, **kwargs)
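The factory above follows a plain registry-dispatch pattern. A minimal, self-contained sketch of that pattern, with hypothetical stub classes standing in for the real extractors (which need model weights and a GPU to instantiate):

```python
# Sketch of the registry-dispatch pattern used by get_extractor.
# StubExtractor and its subclasses are hypothetical stand-ins, not the
# real Molmo/NVILA/Qwen extractor classes.

class StubExtractor:
    def __init__(self, model_path: str, **kwargs):
        self.model_path = model_path


class MolmoStub(StubExtractor):
    pass


class QwenStub(StubExtractor):
    pass


# Registry mapping a string key to a constructor.
EXTRACTORS = {'molmo': MolmoStub, 'qwen': QwenStub}


def get_stub_extractor(model_type: str, model_path: str, **kwargs) -> StubExtractor:
    if model_type not in EXTRACTORS:
        raise ValueError(f"Unknown model type: {model_type}. Available: {list(EXTRACTORS)}")
    return EXTRACTORS[model_type](model_path, **kwargs)
```

Unknown keys fail loudly with the list of valid options, matching the real factory's behavior.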


# ============================================================================
# Extraction with Per-Sample Recording
# ============================================================================

def extract_all_with_predictions(
    extractor: BaseHiddenStateExtractor,
    data: Dict[str, List[dict]],
) -> Dict[str, List[dict]]:
    """Extract hidden states and predictions for all samples.

    Returns:
        sample_records: {category -> [{hidden_states: {layer: vec}, is_correct: bool, predicted: str, index: int}]}
    """
    sample_records = defaultdict(list)

    for category in CATEGORY_ORDER:
        if category not in data:
            continue
        samples = data[category]
        logger.info(f"Processing category: {category} ({len(samples)} samples)")
        success_count = 0

        for sample in tqdm(samples, desc=f"  {category}"):
            try:
                image = decode_base64_image(sample['image_base64'])
                hidden_states, predicted = extractor.extract_and_predict(image, sample['question'])

                is_correct = check_answer(predicted, category)
                mark = "O" if is_correct else "X"
                tqdm.write(f"  [{mark}] #{sample['index']:<6} expected={category:<8} | predicted=\"{predicted[:80]}\"")

                record = {
                    'hidden_states': {},
                    'is_correct': is_correct,
                    'predicted': predicted,
                    'index': sample['index'],
                }

                for layer_idx in extractor.target_layers:
                    if layer_idx in hidden_states:
                        state = hidden_states[layer_idx].numpy().flatten()
                        if state.size > 0:
                            record['hidden_states'][layer_idx] = state

                if record['hidden_states']:
                    sample_records[category].append(record)
                    success_count += 1
                else:
                    logger.warning(f"  No hidden states for sample {sample['index']}")
            except Exception as e:
                logger.warning(f"  Error processing sample {sample['index']}: {e}")
                continue

        correct_n = sum(1 for r in sample_records[category] if r['is_correct'])
        incorrect_n = sum(1 for r in sample_records[category] if not r['is_correct'])
        acc = correct_n / (correct_n + incorrect_n) * 100 if (correct_n + incorrect_n) > 0 else 0
        logger.info(f"  {category}: {success_count}/{len(samples)} extracted | "
                    f"correct={correct_n}, incorrect={incorrect_n}, accuracy={acc:.1f}%")

    # Log overall accuracy summary
    total_correct = sum(1 for cat in sample_records for r in sample_records[cat] if r['is_correct'])
    total_all = sum(len(sample_records[cat]) for cat in sample_records)
    overall_acc = total_correct / total_all * 100 if total_all > 0 else 0
    logger.info("\n  === Category Accuracy Summary ===")
    for cat in CATEGORY_ORDER:
        if cat in sample_records:
            c = sum(1 for r in sample_records[cat] if r['is_correct'])
            n = len(sample_records[cat])
            a = c / n * 100 if n > 0 else 0
            logger.info(f"  {cat:>6s}: {c:>4d}/{n:<4d} = {a:5.1f}%")
    logger.info(f"  {'TOTAL':>6s}: {total_correct:>4d}/{total_all:<4d} = {overall_acc:5.1f}%")
    logger.info("  ================================\n")

    return dict(sample_records)


# ============================================================================
# Balanced Sampling
# ============================================================================

def compute_balanced_size(sample_records: Dict[str, List[dict]], filter_correct: bool) -> int:
    """Find balanced sample size for all 6 categories.

    Rounds down to nearest multiple of 50 when possible.
    If min < 50 but > 0, uses the raw min (no rounding) to avoid skipping.
    """
    counts = []
    for cat in CATEGORY_ORDER:
        if cat not in sample_records:
            return 0
        n = sum(1 for s in sample_records[cat] if s['is_correct'] == filter_correct)
        counts.append(n)

    min_count = min(counts)
    if min_count == 0:
        return 0

    balanced = (min_count // 50) * 50
    if balanced == 0:
        # Less than 50 available but still > 0 — use raw min
        balanced = min_count

    return balanced
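The rounding rule in `compute_balanced_size` is easy to misread, so here is a standalone replica of just the arithmetic, operating on a plain list of per-category counts instead of the record dicts:

```python
# Standalone replica of the rounding rule in compute_balanced_size:
# take the per-category minimum, round it down to a multiple of 50,
# and fall back to the raw minimum when fewer than 50 samples exist.

def balanced_size_from_counts(counts):
    if not counts or min(counts) == 0:
        return 0
    min_count = min(counts)
    balanced = (min_count // 50) * 50
    # balanced == 0 means 0 < min_count < 50: use the raw minimum.
    return balanced if balanced > 0 else min_count
```

So a minimum of 137 yields 100, a minimum of 37 yields 37, and any empty category forces 0.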


def balanced_sample_and_average(
    sample_records: Dict[str, List[dict]],
    filter_correct: bool,
    n_samples: int,
    target_layers: List[int],
    seed: int = 42,
) -> Dict[int, Dict[str, np.ndarray]]:
    """Sample n_samples per category from the filtered group and compute averages.

    Returns:
        {layer_idx -> {category -> averaged_vector}}
    """
    rng = random.Random(seed)

    result = defaultdict(dict)

    for category in CATEGORY_ORDER:
        # .get guards against a category missing from sample_records entirely.
        filtered = [s for s in sample_records.get(category, []) if s['is_correct'] == filter_correct]

        if len(filtered) < n_samples:
            logger.warning(f"  {category}: only {len(filtered)} samples, need {n_samples}")
            continue

        sampled = rng.sample(filtered, n_samples)

        for layer_idx in target_layers:
            vectors = []
            for record in sampled:
                if layer_idx in record['hidden_states']:
                    vectors.append(record['hidden_states'][layer_idx])

            if vectors:
                result[layer_idx][category] = np.mean(vectors, axis=0)

    return dict(result)


# ============================================================================
# Accuracy
# ============================================================================

def compute_accuracy_stats(
    sample_records: Dict[str, List[dict]],
    scale: str,
    model_type: str,
) -> dict:
    """Compute per-category and overall accuracy."""
    stats = {
        'model': model_type,
        'scale': scale,
    }

    total_correct = 0
    total_count = 0

    for cat in CATEGORY_ORDER:
        records = sample_records.get(cat, [])
        n = len(records)
        correct = sum(1 for r in records if r['is_correct'])
        acc = correct / n if n > 0 else 0.0

        stats[f'{cat}_total'] = n
        stats[f'{cat}_correct'] = correct
        stats[f'{cat}_accuracy'] = acc

        total_correct += correct
        total_count += n

    stats['overall_total'] = total_count
    stats['overall_correct'] = total_correct
    stats['overall_accuracy'] = total_correct / total_count if total_count > 0 else 0.0

    return stats
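The accuracy bookkeeping above is simple enough to illustrate on toy data. A stripped-down sketch of the same logic, with `CATEGORY_ORDER` replaced by an explicit argument so it runs without the script's globals:

```python
# Toy replica of compute_accuracy_stats: per-category and overall
# accuracy over lists of {'is_correct': bool} records. category_order
# is passed in explicitly here; the real function reads CATEGORY_ORDER.

def accuracy_stats(records_by_cat, category_order):
    stats, total_correct, total_n = {}, 0, 0
    for cat in category_order:
        records = records_by_cat.get(cat, [])
        n = len(records)
        correct = sum(1 for r in records if r['is_correct'])
        stats[f'{cat}_accuracy'] = correct / n if n else 0.0
        total_correct += correct
        total_n += n
    stats['overall_accuracy'] = total_correct / total_n if total_n else 0.0
    return stats
```

Note that the overall accuracy is sample-weighted (total correct over total count), not a mean of per-category accuracies.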


def save_per_sample_predictions(
    sample_records: Dict[str, List[dict]],
    scale: str,
    save_path: str,
):
    """Save per-sample prediction details to CSV."""
    rows = []
    for cat in CATEGORY_ORDER:
        for record in sample_records.get(cat, []):
            rows.append({
                'index': record['index'],
                'category': cat,
                'scale': scale,
                'predicted': record['predicted'],
                'expected': cat,
                'is_correct': record['is_correct'],
            })

    df = pd.DataFrame(rows)
    df.to_csv(save_path, index=False)
    logger.info(f"Saved {len(rows)} per-sample predictions to {save_path}")


# ============================================================================
# Analysis Functions
# ============================================================================

def compute_similarity_matrix(
    representations: Dict[str, np.ndarray]
) -> pd.DataFrame:
    available = [c for c in CATEGORY_ORDER if c in representations]
    vectors = np.array([representations[cat] for cat in available])
    sim_matrix = cosine_similarity(vectors)
    return pd.DataFrame(sim_matrix, index=available, columns=available)
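For intuition on what `compute_similarity_matrix` produces, here is the same computation on toy category vectors, using only NumPy in place of the sklearn `cosine_similarity` call and the pandas wrapper:

```python
import numpy as np

# Cosine-similarity matrix over row vectors: normalize each row to unit
# length, then the Gram matrix of the normalized rows is the pairwise
# cosine similarity (1.0 on the diagonal).

def cosine_sim_matrix(vectors: np.ndarray) -> np.ndarray:
    normed = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    return normed @ normed.T


# Toy "category representations": orthogonal x, y, and their diagonal.
vecs = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
sim = cosine_sim_matrix(vecs)
```

The diagonal is 1, orthogonal vectors score 0, and the diagonal vector scores 1/√2 against each axis.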


def analyze_hypothesis(sim_df: pd.DataFrame, model_name: str) -> dict:
    results = {'model': model_name}

    pairs_to_check = {
        'above_far': ('above', 'far'),
        'under_close': ('under', 'close'),
        'left_right': ('left', 'right'),
    }

    for pair_name, (cat1, cat2) in pairs_to_check.items():
        if cat1 in sim_df.index and cat2 in sim_df.columns:
            results[f'sim_{pair_name}'] = sim_df.loc[cat1, cat2]
        else:
            results[f'sim_{pair_name}'] = None

    # Explicit None checks: a similarity of exactly 0.0 is falsy but valid.
    if results.get('sim_above_far') is not None and results.get('sim_left_right') is not None:
        results['diff_above_far_vs_left_right'] = results['sim_above_far'] - results['sim_left_right']
    if results.get('sim_under_close') is not None and results.get('sim_left_right') is not None:
        results['diff_under_close_vs_left_right'] = results['sim_under_close'] - results['sim_left_right']

    return results


# ============================================================================
# Visualization
# ============================================================================

def plot_similarity_heatmap(sim_df: pd.DataFrame, title: str, save_path: str):
    plt.figure(figsize=(10, 8))
    available_order = [c for c in CATEGORY_ORDER if c in sim_df.index]
    sim_df_ordered = sim_df.loc[available_order, available_order]

    sns.heatmap(
        sim_df_ordered, annot=True, fmt='.4f', cmap='RdYlBu_r',
        center=0.5, vmin=0, vmax=1, square=True, linewidths=0.5,
        cbar_kws={'label': 'Cosine Similarity'}
    )
    plt.title(title, fontsize=14, fontweight='bold')
    plt.tight_layout()
    plt.savefig(save_path, dpi=300, bbox_inches='tight')
    plt.close()
    logger.info(f"Saved heatmap: {save_path}")


def _extract_pair_trajectory(
    all_layer_sims: Dict[int, pd.DataFrame],
    cat1: str, cat2: str,
) -> Tuple[List[int], List[float]]:
    layers = sorted(all_layer_sims.keys())
    valid_layers = []
    values = []
    for l in layers:
        df = all_layer_sims[l]
        if cat1 in df.index and cat2 in df.columns:
            valid_layers.append(l)
            values.append(df.loc[cat1, cat2])
    return valid_layers, values


def get_representative_layers(all_layers: List[int], n: int = 5) -> List[int]:
    if len(all_layers) <= n:
        return list(all_layers)
    indices = np.linspace(0, len(all_layers) - 1, n, dtype=int)
    return [all_layers[i] for i in indices]
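`get_representative_layers` picks n roughly evenly spaced layers, always keeping the first and last. A self-contained replica of that selection, for checking what it does on a typical 28-layer model:

```python
import numpy as np

# Replica of get_representative_layers: np.linspace over list indices
# with dtype=int truncates to integer positions, so the first and last
# entries are always included.

def representative_layers(all_layers, n=5):
    if len(all_layers) <= n:
        return list(all_layers)
    indices = np.linspace(0, len(all_layers) - 1, n, dtype=int)
    return [all_layers[i] for i in indices]
```

For layers 0..27 this selects [0, 6, 13, 20, 27]; shorter lists are returned unchanged.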


def plot_similarity_trajectories(
    all_layer_sims: Dict[int, pd.DataFrame],
    title: str,
    save_path: str,
):
    fig, axes = plt.subplots(1, 2, figsize=(20, 7))

    ax = axes[0]
    for cat1, cat2, label, color in TRAJECTORY_PAIRS['hypothesis']:
        layers, vals = _extract_pair_trajectory(all_layer_sims, cat1, cat2)
        ax.plot(layers, vals, '-', color=color, label=label, linewidth=2.5, markersize=0)
    for cat1, cat2, label, color in TRAJECTORY_PAIRS['within_axis']:
        layers, vals = _extract_pair_trajectory(all_layer_sims, cat1, cat2)
        ax.plot(layers, vals, '--', color=color, label=label, linewidth=1.8, markersize=0)
    for cat1, cat2, label, color in TRAJECTORY_PAIRS['counter_hypothesis']:
        layers, vals = _extract_pair_trajectory(all_layer_sims, cat1, cat2)
        ax.plot(layers, vals, ':', color=color, label=label, linewidth=1.5, alpha=0.8)

    ax.set_xlabel('Layer Index', fontsize=12)
    ax.set_ylabel('Cosine Similarity', fontsize=12)
    ax.set_title(f'{title}\nPairwise Similarity Across Layers', fontsize=13)
    ax.legend(fontsize=9, loc='best')
    ax.grid(True, alpha=0.3)

    ax = axes[1]
    lr_layers, lr_vals = _extract_pair_trajectory(all_layer_sims, 'left', 'right')
    lr_dict = dict(zip(lr_layers, lr_vals))

    for cat1, cat2, label, color in TRAJECTORY_PAIRS['hypothesis']:
        layers, vals = _extract_pair_trajectory(all_layer_sims, cat1, cat2)
        diffs = [v - lr_dict.get(l, 0) for l, v in zip(layers, vals)]
        ax.plot(layers, diffs, '-', color=color, label=f'{label} - left-right',
                linewidth=2.5, markersize=0)

    for cat1, cat2, label, color in TRAJECTORY_PAIRS['counter_hypothesis']:
        layers, vals = _extract_pair_trajectory(all_layer_sims, cat1, cat2)
        diffs = [v - lr_dict.get(l, 0) for l, v in zip(layers, vals)]
        ax.plot(layers, diffs, ':', color=color, label=f'{label} - left-right',
                linewidth=1.5, alpha=0.8)

    for cat1, cat2, label, color in TRAJECTORY_PAIRS['within_axis']:
        if label == 'left-right':
            continue
        layers, vals = _extract_pair_trajectory(all_layer_sims, cat1, cat2)
        diffs = [v - lr_dict.get(l, 0) for l, v in zip(layers, vals)]
        ax.plot(layers, diffs, '--', color=color, label=f'{label} - left-right',
                linewidth=1.5, alpha=0.7)

    ax.axhline(y=0, color='gray', linestyle='-', linewidth=1, alpha=0.5)
    ax.set_xlabel('Layer Index', fontsize=12)
    ax.set_ylabel('Similarity Difference (pair - left-right)', fontsize=12)
    ax.set_title(f'{title}\nRelative to Left-Right Baseline', fontsize=13)
    ax.legend(fontsize=8, loc='best')
    ax.grid(True, alpha=0.3)

    plt.tight_layout()
    plt.savefig(save_path, dpi=300, bbox_inches='tight')
    plt.close()
    logger.info(f"Saved trajectory plot: {save_path}")


def plot_cross_scale_trajectories(
    cross_scale_data: Dict[str, Dict[int, pd.DataFrame]],
    model_type: str,
    save_path: str,
):
    pairs = [
        ('above', 'far', 'above-far (hypothesis)'),
        ('under', 'close', 'under-close (hypothesis)'),
        ('left', 'right', 'left-right (control)'),
    ]

    fig, axes = plt.subplots(1, len(pairs), figsize=(7 * len(pairs), 6))
    if len(pairs) == 1:
        axes = [axes]

    for idx, (cat1, cat2, label) in enumerate(pairs):
        ax = axes[idx]
        for scale in ['vanilla', '80k', '400k', '800k', '2m', 'roborefer']:
            if scale not in cross_scale_data:
                continue
            layer_sims = cross_scale_data[scale]
            layers, vals = _extract_pair_trajectory(layer_sims, cat1, cat2)
            color = SCALE_COLORS.get(scale, 'gray')
            ax.plot(layers, vals, '-', color=color, label=scale, linewidth=2, markersize=0)

        ax.set_xlabel('Layer Index', fontsize=12)
        ax.set_ylabel('Cosine Similarity', fontsize=12)
        ax.set_title(label, fontsize=13, fontweight='bold')
        ax.legend(fontsize=10)
        ax.grid(True, alpha=0.3)

    fig.suptitle(
        f'{model_type.upper()} - Similarity Trajectory Across Scales',
        fontsize=15, fontweight='bold', y=1.02
    )
    plt.tight_layout()
    plt.savefig(save_path, dpi=300, bbox_inches='tight')
    plt.close()
    logger.info(f"Saved cross-scale trajectory: {save_path}")


def plot_similarity_evolution_heatmap(
    cross_scale_data: Dict[str, Dict[int, pd.DataFrame]],
    model_type: str,
    save_path: str,
):
    pairs = [
        ('above', 'far', 'above-far'),
        ('under', 'close', 'under-close'),
        ('left', 'right', 'left-right'),
        ('above', 'under', 'above-under'),
        ('far', 'close', 'far-close'),
    ]
    scale_order = ['vanilla', '80k', '400k', '800k', '2m', 'roborefer']
    available_scales = [s for s in scale_order if s in cross_scale_data]

    first_scale = available_scales[0]
    all_layers = sorted(cross_scale_data[first_scale].keys())

    fig, axes = plt.subplots(len(pairs), 1, figsize=(max(14, len(all_layers) * 0.5), 3 * len(pairs)))
    if len(pairs) == 1:
        axes = [axes]

    for idx, (cat1, cat2, label) in enumerate(pairs):
        ax = axes[idx]
        matrix = np.full((len(available_scales), len(all_layers)), np.nan)
        for si, scale in enumerate(available_scales):
            layer_sims = cross_scale_data[scale]
            for li, layer in enumerate(all_layers):
                if layer in layer_sims:
                    df = layer_sims[layer]
                    if cat1 in df.index and cat2 in df.columns:
                        matrix[si, li] = df.loc[cat1, cat2]

        im = ax.imshow(matrix, aspect='auto', cmap='RdYlBu_r', vmin=0.5, vmax=1.0)
        ax.set_yticks(range(len(available_scales)))
        ax.set_yticklabels(available_scales, fontsize=10)

        step = max(1, len(all_layers) // 15)
        ax.set_xticks(range(0, len(all_layers), step))
        ax.set_xticklabels([str(all_layers[i]) for i in range(0, len(all_layers), step)], fontsize=8)

        ax.set_title(label, fontsize=12, fontweight='bold')
        ax.set_xlabel('Layer Index', fontsize=10)
        fig.colorbar(im, ax=ax, label='Cosine Similarity', shrink=0.8)

    fig.suptitle(
        f'{model_type.upper()} - Similarity Evolution (Layer x Scale)',
        fontsize=15, fontweight='bold', y=1.01
    )
    plt.tight_layout()
    plt.savefig(save_path, dpi=300, bbox_inches='tight')
    plt.close()
    logger.info(f"Saved evolution heatmap: {save_path}")


# ============================================================================
# Comparison Visualizations (new for this experiment)
# ============================================================================

def plot_accuracy_chart(
    accuracy_records: List[dict],
    model_type: str,
    save_path: str,
):
    """Bar chart of per-category accuracy across scales."""
    fig, ax = plt.subplots(figsize=(14, 6))

    scales = [r['scale'] for r in accuracy_records]
    x = np.arange(len(CATEGORY_ORDER) + 1)  # +1 for overall
    width = 0.8 / len(scales)

    for i, record in enumerate(accuracy_records):
        values = [record.get(f'{cat}_accuracy', 0) for cat in CATEGORY_ORDER]
        values.append(record.get('overall_accuracy', 0))
        offset = (i - len(scales) / 2 + 0.5) * width
        color = SCALE_COLORS.get(record['scale'], 'gray')
        bars = ax.bar(x + offset, values, width, label=record['scale'], color=color)

        for bar, val in zip(bars, values):
            if val > 0:
                ax.annotate(
                    f'{val:.0%}',
                    xy=(bar.get_x() + bar.get_width() / 2, bar.get_height()),
                    xytext=(0, 2), textcoords='offset points',
                    ha='center', va='bottom', fontsize=6, rotation=90,
                )

    ax.set_ylabel('Accuracy')
    ax.set_title(f'{model_type.upper()} - Per-Category Accuracy Across Scales', fontsize=14, fontweight='bold')
    ax.set_xticks(x)
    ax.set_xticklabels(CATEGORY_ORDER + ['overall'])
    ax.set_ylim(0, 1.15)
    # Draw the chance line before building the legend so its label is included.
    ax.axhline(y=0.5, color='gray', linestyle='--', alpha=0.5, label='chance')
    ax.legend(fontsize=9)

    plt.tight_layout()
    plt.savefig(save_path, dpi=300, bbox_inches='tight')
    plt.close()
    logger.info(f"Saved accuracy chart: {save_path}")


def plot_correct_vs_incorrect_overlay(
    correct_sims: Dict[int, pd.DataFrame],
    incorrect_sims: Optional[Dict[int, pd.DataFrame]],
    scale: str,
    model_type: str,
    save_path: str,
):
    """Overlay correct vs incorrect similarity trajectories for key pairs."""
    pairs = [
        ('above', 'far', 'above-far'),
        ('under', 'close', 'under-close'),
        ('left', 'right', 'left-right'),
    ]

    fig, axes = plt.subplots(1, len(pairs), figsize=(7 * len(pairs), 6))
    if len(pairs) == 1:
        axes = [axes]

    for idx, (cat1, cat2, label) in enumerate(pairs):
        ax = axes[idx]

        layers_c, vals_c = _extract_pair_trajectory(correct_sims, cat1, cat2)
        ax.plot(layers_c, vals_c, '-', color='#2ca02c', label='correct', linewidth=2)

        if incorrect_sims:
            layers_i, vals_i = _extract_pair_trajectory(incorrect_sims, cat1, cat2)
            ax.plot(layers_i, vals_i, '-', color='#d62728', label='incorrect', linewidth=2)

        ax.set_xlabel('Layer Index', fontsize=12)
        ax.set_ylabel('Cosine Similarity', fontsize=12)
        ax.set_title(f'{label}', fontsize=13, fontweight='bold')
        ax.legend(fontsize=10)
        ax.grid(True, alpha=0.3)

    fig.suptitle(
        f'{model_type.upper()} ({scale}) - Correct vs Incorrect',
        fontsize=15, fontweight='bold', y=1.02
    )
    plt.tight_layout()
    plt.savefig(save_path, dpi=300, bbox_inches='tight')
    plt.close()
    logger.info(f"Saved correct vs incorrect overlay: {save_path}")


def plot_ablation_summary(
    ablation_data: List[dict],
    model_type: str,
    save_path: str,
):
    """Key ablation plot: correct-only vs all-samples similarity across scales.

    x-axis = scales, two lines per pair:
      - solid: correct-only similarity
      - dashed: all-samples similarity (from the same data, no balanced sampling)
    """
    pairs = [
        ('above', 'far', 'above-far', '#d62728'),
        ('under', 'close', 'under-close', '#1f77b4'),
        ('left', 'right', 'left-right', '#2ca02c'),
    ]

    scale_order = ['vanilla', '80k', '400k', '800k', '2m', 'roborefer']

    fig, axes = plt.subplots(1, 2, figsize=(18, 7))

    # Left panel: absolute similarities
    ax = axes[0]
    for cat1, cat2, label, color in pairs:
        x_vals, y_correct, y_all = [], [], []
        for i, scale in enumerate(scale_order):
            entry = next((d for d in ablation_data if d['scale'] == scale), None)
            if entry is None:
                continue
            sim_c = entry.get(f'correct_{cat1}_{cat2}')
            sim_a = entry.get(f'all_{cat1}_{cat2}')
            # Require both values so the dashed "all" line never receives a None.
            if sim_c is not None and sim_a is not None:
                x_vals.append(i)
                y_correct.append(sim_c)
                y_all.append(sim_a)

        if x_vals:
            ax.plot(x_vals, y_correct, '-o', color=color, label=f'{label} (correct)', linewidth=2.5)
            ax.plot(x_vals, y_all, '--s', color=color, label=f'{label} (all)', linewidth=1.5, alpha=0.6)

    ax.set_xticks(range(len(scale_order)))
    ax.set_xticklabels(scale_order, fontsize=10)
    ax.set_xlabel('Scale', fontsize=12)
    ax.set_ylabel('Cosine Similarity', fontsize=12)
    ax.set_title('Correct-Only vs All-Samples Similarity', fontsize=13, fontweight='bold')
    ax.legend(fontsize=8, loc='best')
    ax.grid(True, alpha=0.3)

    # Right panel: accuracy overlay
    ax2 = axes[1]
    x_vals, acc_vals = [], []
    for i, scale in enumerate(scale_order):
        entry = next((d for d in ablation_data if d['scale'] == scale), None)
        if entry and 'accuracy' in entry:
            x_vals.append(i)
            acc_vals.append(entry['accuracy'])

    ax2.bar(x_vals, acc_vals, color=[SCALE_COLORS.get(scale_order[x], 'gray') for x in x_vals], alpha=0.8)
    for x, acc in zip(x_vals, acc_vals):
        ax2.annotate(f'{acc:.1%}', xy=(x, acc), xytext=(0, 5), textcoords='offset points',
                     ha='center', fontsize=10, fontweight='bold')

    ax2.set_xticks(range(len(scale_order)))
    ax2.set_xticklabels(scale_order, fontsize=10)
    ax2.set_xlabel('Scale', fontsize=12)
    ax2.set_ylabel('Overall Accuracy', fontsize=12)
    ax2.set_title('Model Accuracy by Scale', fontsize=13, fontweight='bold')
    ax2.set_ylim(0, 1.15)
    ax2.grid(True, alpha=0.3, axis='y')

    fig.suptitle(
        f'{model_type.upper()} - Ablation: Is Similarity Change Due to Accuracy?',
        fontsize=15, fontweight='bold', y=1.02
    )
    plt.tight_layout()
    plt.savefig(save_path, dpi=300, bbox_inches='tight')
    plt.close()
    logger.info(f"Saved ablation summary: {save_path}")
1347
+
1348
+ # ============================================================================
1349
+ # Model Configurations
1350
+ # ============================================================================
1351
+
1352
+ MODEL_CONFIGS = {
1353
+ 'molmo': {
1354
+ 'vanilla': 'allenai/Molmo-7B-O-0924',
1355
+ '80k': '/data/shared/Qwen/molmo/outputs/data_scale_exp_80k/unshared',
1356
+ '400k': '/data/shared/Qwen/molmo/outputs/data_scale_exp_400k/unshared',
1357
+ '800k': '/data/shared/Qwen/molmo/outputs/data_scale_exp_800k/unshared',
1358
+ '2m': '/data/shared/Qwen/molmo/outputs/data_scale_exp_2m/unshared',
1359
+ },
1360
+ 'nvila': {
1361
+ 'vanilla': '/data/shared/Qwen/mydisk/NVILA-Lite-2B',
1362
+ '80k': '/data/shared/Qwen/mydisk/output/DATA/NVILA-Lite-2B-DATA_SCALE_EXP_80K-20251108_180221',
1363
+ '400k': '/data/shared/Qwen/mydisk/output/DATA/NVILA-Lite-2B-DATA_SCALE_EXP_400K-20251108_180221',
1364
+ '800k': '/data/shared/Qwen/mydisk/output/DATA/NVILA-Lite-2B-DATA_SCALE_EXP_800K-20251108_180221',
1365
+ '2m': '/data/shared/Qwen/mydisk/output/DATA/NVILA-Lite-2B-DATA_SCALE_EXP_2M-20260205_003632',
1366
+ 'roborefer': '/data/shared/Qwen/mydisk/RoboRefer_model',
1367
+ },
1368
+ 'qwen': {
1369
+ 'vanilla': 'Qwen/Qwen2.5-VL-3B-Instruct',
1370
+ '80k': '/data/shared/Qwen/mydisk/output/Qwen/Qwen2.5-VL-3B-Instruct-data_scale_exp_80k-20251114_120221',
1371
+ '400k': '/data/shared/Qwen/mydisk/output/Qwen/Qwen2.5-VL-3B-Instruct-data_scale_exp_400k-20251114_120221',
1372
+ '800k': '/data/shared/Qwen/mydisk/output/Qwen/Qwen2.5-VL-3B-Instruct-data_scale_exp_800k-20251114_120221',
1373
+ '2m': '/data/shared/Qwen/mydisk/output/Qwen/Qwen2.5-VL-3B-Instruct-data_scale_exp_2m-20260109_120517',
1374
+ },
1375
+ }
1376
+
1377
+
1378
+ # ============================================================================
1379
+ # Main
1380
+ # ============================================================================
1381
+
1382
+ def process_subset(
1383
+ subset_name: str,
1384
+ all_layer_reps: Dict[int, Dict[str, np.ndarray]],
1385
+ target_layers: List[int],
1386
+ scale: str,
1387
+ model_type: str,
1388
+ output_dir: str,
1389
+ n_samples: int,
1390
+ ) -> Tuple[Dict[int, pd.DataFrame], List[dict]]:
1391
+ """Compute similarity matrices and save outputs for one subset (correct/incorrect)."""
1392
+ num_layers = len(target_layers)
1393
+ scale_sims = {}
1394
+ results_list = []
1395
+
1396
+ for layer_idx in sorted(all_layer_reps.keys()):
1397
+ reps = all_layer_reps[layer_idx]
1398
+ if len(reps) < 2:
1399
+ continue
1400
+
1401
+ sim_df = compute_similarity_matrix(reps)
1402
+ scale_sims[layer_idx] = sim_df
1403
+
1404
+ model_name = f"{model_type}_{scale}_{subset_name}"
1405
+ results = analyze_hypothesis(sim_df, model_name)
1406
+ results['layer_idx'] = layer_idx
1407
+ results['subset'] = subset_name
1408
+ results['scale'] = scale
1409
+ results['n_samples_per_cat'] = n_samples
1410
+ results_list.append(results)
1411
+
1412
+ sim_df.to_csv(os.path.join(output_dir, f'similarity_{scale}_L{layer_idx}.csv'))
1413
+
1414
+ if scale_sims:
1415
+ rep_layers = get_representative_layers(sorted(scale_sims.keys()))
1416
+ for layer_idx in rep_layers:
1417
+ sim_df = scale_sims[layer_idx]
1418
+ plot_similarity_heatmap(
1419
+ sim_df,
1420
+ f'{model_type.upper()} ({scale}) [{subset_name}, n={n_samples}] - Layer {layer_idx}/{num_layers-1}',
1421
+ os.path.join(output_dir, f'heatmap_{scale}_L{layer_idx}.png')
1422
+ )
1423
+
1424
+ plot_similarity_trajectories(
1425
+ scale_sims,
1426
+ f'{model_type.upper()} ({scale}) [{subset_name}, n={n_samples}]',
1427
+ os.path.join(output_dir, f'trajectory_{scale}.png')
1428
+ )
1429
+
1430
+ return scale_sims, results_list
1431
+
1432
+
1433
+ def _load_scale_sims_from_csvs(subset_dir: str, scale: str) -> Dict[int, pd.DataFrame]:
1434
+ """Reload per-layer similarity CSVs for one scale from disk."""
1435
+ import glob as glob_mod
1436
+ pattern = os.path.join(subset_dir, f'similarity_{scale}_L*.csv')
1437
+ files = glob_mod.glob(pattern)
1438
+ layer_sims = {}
1439
+ for fpath in files:
1440
+ basename = os.path.basename(fpath)
1441
+ # similarity_{scale}_L{idx}.csv
1442
+ layer_str = basename.replace(f'similarity_{scale}_L', '').replace('.csv', '')
1443
+ try:
1444
+ layer_idx = int(layer_str)
1445
+ except ValueError:
1446
+ continue
1447
+ df = pd.read_csv(fpath, index_col=0)
1448
+ layer_sims[layer_idx] = df
1449
+ return layer_sims
1450
+
1451
+
1452
+ def run_merge(
1453
+ model_type: str,
1454
+ scales: List[str],
1455
+ output_dir: str,
1456
+ correct_dir: str,
1457
+ incorrect_dir: str,
1458
+ accuracy_dir: str,
1459
+ comparison_dir: str,
1460
+ ):
1461
+ """Merge mode: read per-scale results and generate cross-scale plots."""
1462
+
1463
+ # Determine which scales have data
1464
+ scale_order = ['vanilla', '80k', '400k', '800k', '2m', 'roborefer']
1465
+ available_scales = [s for s in scale_order if s in scales]
1466
+
1467
+ # 1. Rebuild cross-scale similarity dicts from CSVs
1468
+ cross_scale_correct = {}
1469
+ cross_scale_incorrect = {}
1470
+ for scale in available_scales:
1471
+ c_sims = _load_scale_sims_from_csvs(correct_dir, scale)
1472
+ if c_sims:
1473
+ cross_scale_correct[scale] = c_sims
1474
+ logger.info(f" Loaded correct-only CSVs for {scale}: {len(c_sims)} layers")
1475
+
1476
+ i_sims = _load_scale_sims_from_csvs(incorrect_dir, scale)
1477
+ if i_sims:
1478
+ cross_scale_incorrect[scale] = i_sims
1479
+ logger.info(f" Loaded incorrect-only CSVs for {scale}: {len(i_sims)} layers")
1480
+
1481
+ # 2. Cross-scale trajectory and evolution heatmap
1482
+ if len(cross_scale_correct) > 1:
1483
+ logger.info("\n--- Cross-scale comparison (correct-only) ---")
1484
+ plot_cross_scale_trajectories(
1485
+ cross_scale_correct, model_type,
1486
+ os.path.join(comparison_dir, 'cross_scale_correct_only.png')
1487
+ )
1488
+ plot_similarity_evolution_heatmap(
1489
+ cross_scale_correct, model_type,
1490
+ os.path.join(comparison_dir, 'evolution_heatmap_correct.png')
1491
+ )
1492
+
1493
+ if len(cross_scale_incorrect) > 1:
1494
+ logger.info("\n--- Cross-scale comparison (incorrect-only) ---")
1495
+ plot_cross_scale_trajectories(
1496
+ cross_scale_incorrect, model_type,
1497
+ os.path.join(comparison_dir, 'cross_scale_incorrect_only.png')
1498
+ )
1499
+ plot_similarity_evolution_heatmap(
1500
+ cross_scale_incorrect, model_type,
1501
+ os.path.join(comparison_dir, 'evolution_heatmap_incorrect.png')
1502
+ )
1503
+
1504
+ # 3. Accuracy chart from per-scale JSONs
1505
+ accuracy_records = []
1506
+ for scale in available_scales:
1507
+ acc_path = os.path.join(accuracy_dir, f'accuracy_{scale}.json')
1508
+ if os.path.exists(acc_path):
1509
+ with open(acc_path) as f:
1510
+ accuracy_records.append(json.load(f))
1511
+
1512
+ if accuracy_records:
1513
+ acc_df = pd.DataFrame(accuracy_records)
1514
+ acc_df.to_csv(os.path.join(accuracy_dir, 'accuracy_summary.csv'), index=False)
1515
+ plot_accuracy_chart(accuracy_records, model_type,
1516
+ os.path.join(accuracy_dir, 'accuracy_chart.png'))
1517
+ logger.info(f" Saved merged accuracy summary ({len(accuracy_records)} scales)")
1518
+
1519
+ # 4. Ablation summary from per-scale JSONs
1520
+ ablation_data = []
1521
+ for scale in available_scales:
1522
+ abl_path = os.path.join(comparison_dir, f'ablation_{scale}.json')
1523
+ if os.path.exists(abl_path):
1524
+ with open(abl_path) as f:
1525
+ ablation_data.append(json.load(f))
1526
+
1527
+ if ablation_data:
1528
+ ablation_df = pd.DataFrame(ablation_data)
1529
+ ablation_df.to_csv(os.path.join(comparison_dir, 'ablation_summary.csv'), index=False)
1530
+ plot_ablation_summary(ablation_data, model_type,
1531
+ os.path.join(comparison_dir, 'ablation_summary.png'))
1532
+ logger.info(f" Saved merged ablation summary ({len(ablation_data)} scales)")
1533
+
1534
+ # 5. Inventory which (subset, scale) pairs produced similarity CSVs
1535
+ import glob as glob_mod
1536
+ all_results_files = []
1537
+ for subset_dir, subset_name in [(correct_dir, 'correct'), (incorrect_dir, 'incorrect')]:
1538
+ for scale in available_scales:
1539
+ # Check if any similarity CSVs exist for this scale
1540
+ pattern = os.path.join(subset_dir, f'similarity_{scale}_L*.csv')
1541
+ if glob_mod.glob(pattern):
1542
+ all_results_files.append((subset_dir, scale, subset_name))
1543
+
1544
+ logger.info(f" Found similarity CSVs for {len(all_results_files)} (subset, scale) pairs")
+
+ logger.info("\n=== Merge Complete ===")
1545
+ logger.info(f"Results in: {output_dir}")
1546
+
1547
+
1548
+ def main():
1549
+ parser = argparse.ArgumentParser(description='Experiment 2-A (Correct Filter): Correctness-Filtered Analysis')
1550
+ parser.add_argument('--data_path', type=str,
1551
+ default='/data/shared/Qwen/EmbSpatial-Bench/EmbSpatial-Bench.tsv')
1552
+ parser.add_argument('--model_type', type=str, required=True,
1553
+ choices=['molmo', 'nvila', 'qwen'])
1554
+ parser.add_argument('--scales', type=str, nargs='+',
1555
+ default=['vanilla', '80k', '400k', '800k', '2m'])
1556
+ parser.add_argument('--output_dir', type=str,
1557
+ default='/data/shared/Qwen/experiments/exp2a_correct_filter/results')
1558
+ parser.add_argument('--device', type=str, default='cuda')
1559
+ parser.add_argument('--seed', type=int, default=42)
1560
+ parser.add_argument('--merge', action='store_true',
1561
+ help='Merge mode: skip extraction, read existing per-scale results '
1562
+ 'and generate cross-scale comparison plots only.')
1563
+ parser.add_argument('--no-auto-roborefer', action='store_true', dest='no_auto_roborefer',
1564
+ help='Do not auto-add roborefer for nvila (use for parallel mode).')
1565
+
1566
+ args = parser.parse_args()
1567
+
1568
+ if args.model_type == 'nvila' and 'roborefer' not in args.scales and not args.no_auto_roborefer:
1569
+ args.scales.append('roborefer')
1570
+
1571
+ np.random.seed(args.seed)
1572
+ torch.manual_seed(args.seed)
1573
+ random.seed(args.seed)
1574
+
1575
+ output_dir = os.path.join(args.output_dir, args.model_type)
1576
+ correct_dir = os.path.join(output_dir, 'correct_only')
1577
+ incorrect_dir = os.path.join(output_dir, 'incorrect_only')
1578
+ accuracy_dir = os.path.join(output_dir, 'accuracy')
1579
+ comparison_dir = os.path.join(output_dir, 'comparison')
1580
+ for d in [correct_dir, incorrect_dir, accuracy_dir, comparison_dir]:
1581
+ os.makedirs(d, exist_ok=True)
1582
+
1583
+ # ------------------------------------------------------------------
1584
+ # Merge mode: read existing per-scale outputs and generate plots
1585
+ # ------------------------------------------------------------------
1586
+ if args.merge:
1587
+ logger.info("\n=== MERGE MODE: Reading existing per-scale results ===")
1588
+ run_merge(args.model_type, args.scales, output_dir,
1589
+ correct_dir, incorrect_dir, accuracy_dir, comparison_dir)
1590
+ return
1591
+
1592
+ # ------------------------------------------------------------------
1593
+ # Normal mode: extract + analyze
1594
+ # ------------------------------------------------------------------
1595
+ logger.info("\n=== Loading & Modifying EmbSpatialBench Data (ALL samples) ===")
1596
+ data = load_and_modify_data(args.data_path, args.seed)
1597
+
1598
+ model_configs = MODEL_CONFIGS[args.model_type]
1599
+
1600
+ all_results = []
1601
+ accuracy_records = []
1602
+ cross_scale_correct = {}
1603
+ cross_scale_incorrect = {}
1604
+ ablation_data = []
1605
+
1606
+ for scale in args.scales:
1607
+ if scale not in model_configs:
1608
+ logger.warning(f"Scale {scale} not available for {args.model_type}, skipping...")
1609
+ continue
1610
+
1611
+ model_path = model_configs[scale]
1612
+ if not os.path.exists(model_path) and not model_path.startswith('Qwen/') and not model_path.startswith('allenai/'):
1613
+ logger.warning(f"Model path not found: {model_path}, skipping...")
1614
+ continue
1615
+
1616
+ logger.info(f"\n{'='*60}")
1617
+ logger.info(f"Processing {args.model_type} - {scale}")
1618
+ logger.info(f"Model path: {model_path}")
1619
+ logger.info(f"{'='*60}")
1620
+
1621
+ try:
1622
+ extractor = get_extractor(
1623
+ args.model_type, model_path, scale=scale, device=args.device,
1624
+ )
1625
+ target_layers = extractor.target_layers
1626
+
1627
+ # Phase A: Extract all samples with predictions
1628
+ logger.info("\n--- Phase A: Extracting hidden states with predictions ---")
1629
+ sample_records = extract_all_with_predictions(extractor, data)
1630
+
1631
+ # Save per-sample predictions
1632
+ save_per_sample_predictions(
1633
+ sample_records, scale,
1634
+ os.path.join(accuracy_dir, f'predictions_{scale}.csv')
1635
+ )
1636
+
1637
+ # Compute and save accuracy
1638
+ acc_stats = compute_accuracy_stats(sample_records, scale, args.model_type)
1639
+ accuracy_records.append(acc_stats)
1640
+ logger.info(f"\n Accuracy for {scale}: {acc_stats['overall_accuracy']:.1%}")
1641
+ for cat in CATEGORY_ORDER:
1642
+ logger.info(f" {cat}: {acc_stats[f'{cat}_correct']}/{acc_stats[f'{cat}_total']} "
1643
+ f"= {acc_stats[f'{cat}_accuracy']:.1%}")
1644
+
1645
+ # Phase B: Balanced sampling
1646
+ logger.info("\n--- Phase B: Balanced sampling ---")
1647
+
1648
+ n_correct = compute_balanced_size(sample_records, filter_correct=True)
1649
+ n_incorrect = compute_balanced_size(sample_records, filter_correct=False)
1650
+ logger.info(f" Correct group: {n_correct} samples/category")
1651
+ logger.info(f" Incorrect group: {n_incorrect} samples/category")
1652
+
1653
+ # Also compute "all" (no filter) for ablation comparison using ALL samples
1654
+ logger.info("\n--- Computing all-samples similarity (unfiltered) ---")
1655
+ all_reps = {}
1656
+ for layer_idx in target_layers:
1657
+ cat_avgs = {}
1658
+ for cat in CATEGORY_ORDER:
1659
+ vectors = [r['hidden_states'][layer_idx]
1660
+ for r in sample_records.get(cat, [])
1661
+ if layer_idx in r['hidden_states']]
1662
+ if vectors:
1663
+ cat_avgs[cat] = np.mean(vectors, axis=0)
1664
+ if cat_avgs:
1665
+ all_reps[layer_idx] = cat_avgs
1666
+
1667
+ # Get "all" similarity at a representative deep layer for ablation
1668
+ all_sims_for_ablation = {}
+ rep_layer = None  # stays None when no representations were collected
1669
+ if all_reps:
1670
+ rep_layer = get_representative_layers(sorted(all_reps.keys()), n=1)[0]
1671
+ rep_sim_all = compute_similarity_matrix(all_reps[rep_layer])
1672
+ for cat1, cat2, _, _ in (TRAJECTORY_PAIRS['hypothesis'] +
1673
+ TRAJECTORY_PAIRS['within_axis']):
1674
+ if cat1 in rep_sim_all.index and cat2 in rep_sim_all.columns:
1675
+ all_sims_for_ablation[f'all_{cat1}_{cat2}'] = rep_sim_all.loc[cat1, cat2]
1676
+
1677
+ # Phase C: Process correct-only subset
1678
+ correct_layer_sims = {}
1679
+ if n_correct > 0:
1680
+ logger.info(f"\n--- Phase C: Processing correct-only (n={n_correct}) ---")
1681
+ correct_reps = balanced_sample_and_average(
1682
+ sample_records, filter_correct=True, n_samples=n_correct,
1683
+ target_layers=target_layers, seed=args.seed,
1684
+ )
1685
+
1686
+ correct_layer_sims, correct_results = process_subset(
1687
+ 'correct', correct_reps, target_layers, scale,
1688
+ args.model_type, correct_dir, n_correct,
1689
+ )
1690
+ all_results.extend(correct_results)
1691
+ cross_scale_correct[scale] = correct_layer_sims
1692
+ else:
1693
+ logger.warning(f" Skipping correct-only: no correct samples in some category")
1694
+
1695
+ # Process incorrect-only subset
1696
+ incorrect_layer_sims = {}
1697
+ if n_incorrect > 0:
1698
+ logger.info(f"\n--- Phase C: Processing incorrect-only (n={n_incorrect}) ---")
1699
+ incorrect_reps = balanced_sample_and_average(
1700
+ sample_records, filter_correct=False, n_samples=n_incorrect,
1701
+ target_layers=target_layers, seed=args.seed,
1702
+ )
1703
+
1704
+ incorrect_layer_sims, incorrect_results = process_subset(
1705
+ 'incorrect', incorrect_reps, target_layers, scale,
1706
+ args.model_type, incorrect_dir, n_incorrect,
1707
+ )
1708
+ all_results.extend(incorrect_results)
1709
+ cross_scale_incorrect[scale] = incorrect_layer_sims
1710
+ else:
1711
+ logger.warning(f" Skipping incorrect-only: no incorrect samples in some category")
1712
+
1713
+ # Correct vs incorrect overlay
1714
+ if correct_layer_sims:
1715
+ plot_correct_vs_incorrect_overlay(
1716
+ correct_layer_sims,
1717
+ incorrect_layer_sims if incorrect_layer_sims else None,
1718
+ scale, args.model_type,
1719
+ os.path.join(comparison_dir, f'correct_vs_incorrect_{scale}.png')
1720
+ )
1721
+
1722
+ # Build ablation entry
1723
+ ablation_entry = {
1724
+ 'scale': scale,
1725
+ 'accuracy': acc_stats['overall_accuracy'],
1726
+ 'n_correct_per_cat': n_correct,
1727
+ 'n_incorrect_per_cat': n_incorrect,
1728
+ }
1729
+ ablation_entry.update(all_sims_for_ablation)
1730
+
1731
+ # Get correct-only similarity at the same representative layer
1732
+ if correct_layer_sims and rep_layer in correct_layer_sims:
1733
+ rep_sim_c = correct_layer_sims[rep_layer]
1734
+ for cat1, cat2, _, _ in (TRAJECTORY_PAIRS['hypothesis'] +
1735
+ TRAJECTORY_PAIRS['within_axis']):
1736
+ if cat1 in rep_sim_c.index and cat2 in rep_sim_c.columns:
1737
+ ablation_entry[f'correct_{cat1}_{cat2}'] = rep_sim_c.loc[cat1, cat2]
1738
+
1739
+ # Get incorrect-only similarity
1740
+ if incorrect_layer_sims and rep_layer in incorrect_layer_sims:
1741
+ rep_sim_i = incorrect_layer_sims[rep_layer]
1742
+ for cat1, cat2, _, _ in (TRAJECTORY_PAIRS['hypothesis'] +
1743
+ TRAJECTORY_PAIRS['within_axis']):
1744
+ if cat1 in rep_sim_i.index and cat2 in rep_sim_i.columns:
1745
+ ablation_entry[f'incorrect_{cat1}_{cat2}'] = rep_sim_i.loc[cat1, cat2]
1746
+
1747
+ ablation_data.append(ablation_entry)
1748
+
1749
+ # Save per-scale ablation JSON (for merge mode)
1750
+ ablation_path = os.path.join(comparison_dir, f'ablation_{scale}.json')
1751
+ with open(ablation_path, 'w') as f:
1752
+ json.dump(ablation_entry, f, indent=2, default=str)
1753
+
1754
+ # Save per-scale accuracy JSON (for merge mode)
1755
+ acc_path = os.path.join(accuracy_dir, f'accuracy_{scale}.json')
1756
+ with open(acc_path, 'w') as f:
1757
+ json.dump(acc_stats, f, indent=2, default=str)
1758
+
1759
+ # Cleanup
1760
+ del sample_records
1761
+ extractor.cleanup()
1762
+
1763
+ except Exception as e:
1764
+ logger.error(f"Failed to process {args.model_type} - {scale}: {e}")
1765
+ import traceback
1766
+ traceback.print_exc()
1767
+ continue
1768
+
1769
+ # ========================
1770
+ # Cross-scale comparisons
1771
+ # ========================
1772
+
1773
+ if len(cross_scale_correct) > 1:
1774
+ logger.info("\n--- Cross-scale comparison (correct-only) ---")
1775
+ plot_cross_scale_trajectories(
1776
+ cross_scale_correct, args.model_type,
1777
+ os.path.join(comparison_dir, 'cross_scale_correct_only.png')
1778
+ )
1779
+ plot_similarity_evolution_heatmap(
1780
+ cross_scale_correct, args.model_type,
1781
+ os.path.join(comparison_dir, 'evolution_heatmap_correct.png')
1782
+ )
1783
+
1784
+ if len(cross_scale_incorrect) > 1:
1785
+ logger.info("\n--- Cross-scale comparison (incorrect-only) ---")
1786
+ plot_cross_scale_trajectories(
1787
+ cross_scale_incorrect, args.model_type,
1788
+ os.path.join(comparison_dir, 'cross_scale_incorrect_only.png')
1789
+ )
1790
+ plot_similarity_evolution_heatmap(
1791
+ cross_scale_incorrect, args.model_type,
1792
+ os.path.join(comparison_dir, 'evolution_heatmap_incorrect.png')
1793
+ )
1794
+
1795
+ # Accuracy chart
1796
+ if accuracy_records:
1797
+ acc_df = pd.DataFrame(accuracy_records)
1798
+ acc_df.to_csv(os.path.join(accuracy_dir, 'accuracy_summary.csv'), index=False)
1799
+ plot_accuracy_chart(accuracy_records, args.model_type,
1800
+ os.path.join(accuracy_dir, 'accuracy_chart.png'))
1801
+
1802
+ # Ablation summary
1803
+ if ablation_data:
1804
+ ablation_df = pd.DataFrame(ablation_data)
1805
+ ablation_df.to_csv(os.path.join(comparison_dir, 'ablation_summary.csv'), index=False)
1806
+ plot_ablation_summary(ablation_data, args.model_type,
1807
+ os.path.join(comparison_dir, 'ablation_summary.png'))
1808
+
1809
+ # Save all results
1810
+ if all_results:
1811
+ results_df = pd.DataFrame(all_results)
1812
+ results_df.to_csv(os.path.join(output_dir, 'results_summary.csv'), index=False)
1813
+
1814
+ logger.info(f"\n{'='*60}")
1815
+ logger.info("=== Analysis Complete ===")
1816
+ logger.info(f"Results saved to: {output_dir}")
1817
+ logger.info(f" Accuracy: {accuracy_dir}")
1818
+ logger.info(f" Correct-only: {correct_dir}")
1819
+ logger.info(f" Incorrect-only: {incorrect_dir}")
1820
+ logger.info(f" Comparison: {comparison_dir}")
1821
+ logger.info(f"{'='*60}")
1822
+
1823
+
1824
+ if __name__ == '__main__':
1825
+ main()
exp2a_correct_filter/run_molmo.sh ADDED
@@ -0,0 +1,62 @@
1
+ #!/bin/bash
2
+ set -e
3
+
4
+ SCRIPT="/data/shared/Qwen/experiments/exp2a_correct_filter/exp2a_correct_filter_analysis.py"
5
+ PYTHON="conda run --no-capture-output -n molmo python"
6
+ MODEL="molmo"
7
+ LOG_DIR="/data/shared/Qwen/experiments/exp2a_correct_filter/logs/${MODEL}"
8
+ mkdir -p "$LOG_DIR"
9
+
10
+ SCALES=("vanilla" "80k" "400k" "800k" "2m")
11
+ GPUS=(0 1 2 3 4)
12
+
13
+ echo "========================================="
14
+ echo " Molmo: Launching ${#SCALES[@]} scales in parallel"
15
+ echo "========================================="
16
+
17
+ PIDS=()
18
+ for i in "${!SCALES[@]}"; do
19
+ scale="${SCALES[$i]}"
20
+ gpu="${GPUS[$i]}"
21
+ log="${LOG_DIR}/${scale}.log"
22
+
23
+ echo "[GPU $gpu] $scale -> $log"
24
+ CUDA_VISIBLE_DEVICES=$gpu $PYTHON $SCRIPT \
25
+ --model_type $MODEL \
26
+ --scales $scale \
27
+ --device cuda \
28
+ --no-auto-roborefer \
29
+ > "$log" 2>&1 &
30
+ PIDS+=($!)
31
+ done
32
+
33
+ echo ""
34
+ echo "Waiting for all ${#PIDS[@]} processes..."
35
+ echo "PIDs: ${PIDS[*]}"
36
+ echo ""
37
+
38
+ FAILED=0
39
+ for i in "${!PIDS[@]}"; do
40
+ pid="${PIDS[$i]}"
41
+ scale="${SCALES[$i]}"
42
+ if wait $pid; then
43
+ echo "[DONE] $scale (PID $pid) - SUCCESS"
44
+ else
45
+ rc=$?
+ echo "[FAIL] $scale (PID $pid) - EXIT CODE $rc"
46
+ FAILED=$((FAILED + 1))
47
+ fi
48
+ done
49
+
50
+ echo ""
51
+ if [ $FAILED -gt 0 ]; then
52
+ echo "WARNING: $FAILED scale(s) failed. Check logs in $LOG_DIR"
53
+ fi
54
+
55
+ echo "========================================="
56
+ echo " Molmo: Running merge"
57
+ echo "========================================="
58
+ $PYTHON $SCRIPT --model_type $MODEL --merge 2>&1 | tee "${LOG_DIR}/merge.log"
59
+
60
+ echo ""
61
+ echo "ALL DONE: $MODEL"
62
+ echo "Results: /data/shared/Qwen/experiments/exp2a_correct_filter/results/${MODEL}/"
exp2a_correct_filter/run_nvila.sh ADDED
@@ -0,0 +1,63 @@
1
+ #!/bin/bash
2
+ set -e
3
+
4
+ SCRIPT="/data/shared/Qwen/experiments/exp2a_correct_filter/exp2a_correct_filter_analysis.py"
5
+ PYTHON="conda run --no-capture-output -n vila python"
6
+ MODEL="nvila"
7
+ LOG_DIR="/data/shared/Qwen/experiments/exp2a_correct_filter/logs/${MODEL}"
8
+ mkdir -p "$LOG_DIR"
9
+
10
+ # NVILA has 6 scales (including roborefer)
11
+ SCALES=("vanilla" "80k" "400k" "800k" "2m" "roborefer")
12
+ GPUS=(0 1 2 3 4 5)
13
+
14
+ echo "========================================="
15
+ echo " NVILA: Launching ${#SCALES[@]} scales in parallel"
16
+ echo "========================================="
17
+
18
+ PIDS=()
19
+ for i in "${!SCALES[@]}"; do
20
+ scale="${SCALES[$i]}"
21
+ gpu="${GPUS[$i]}"
22
+ log="${LOG_DIR}/${scale}.log"
23
+
24
+ echo "[GPU $gpu] $scale -> $log"
25
+ CUDA_VISIBLE_DEVICES=$gpu $PYTHON $SCRIPT \
26
+ --model_type $MODEL \
27
+ --scales $scale \
28
+ --device cuda \
29
+ --no-auto-roborefer \
30
+ > "$log" 2>&1 &
31
+ PIDS+=($!)
32
+ done
33
+
34
+ echo ""
35
+ echo "Waiting for all ${#PIDS[@]} processes..."
36
+ echo "PIDs: ${PIDS[*]}"
37
+ echo ""
38
+
39
+ FAILED=0
40
+ for i in "${!PIDS[@]}"; do
41
+ pid="${PIDS[$i]}"
42
+ scale="${SCALES[$i]}"
43
+ if wait $pid; then
44
+ echo "[DONE] $scale (PID $pid) - SUCCESS"
45
+ else
46
+ rc=$?
+ echo "[FAIL] $scale (PID $pid) - EXIT CODE $rc"
47
+ FAILED=$((FAILED + 1))
48
+ fi
49
+ done
50
+
51
+ echo ""
52
+ if [ $FAILED -gt 0 ]; then
53
+ echo "WARNING: $FAILED scale(s) failed. Check logs in $LOG_DIR"
54
+ fi
55
+
56
+ echo "========================================="
57
+ echo " NVILA: Running merge"
58
+ echo "========================================="
59
+ $PYTHON $SCRIPT --model_type $MODEL --merge 2>&1 | tee "${LOG_DIR}/merge.log"
60
+
61
+ echo ""
62
+ echo "ALL DONE: $MODEL"
63
+ echo "Results: /data/shared/Qwen/experiments/exp2a_correct_filter/results/${MODEL}/"
exp2a_correct_filter/run_qwen.sh ADDED
@@ -0,0 +1,62 @@
1
+ #!/bin/bash
2
+ set -e
3
+
4
+ SCRIPT="/data/shared/Qwen/experiments/exp2a_correct_filter/exp2a_correct_filter_analysis.py"
5
+ PYTHON="/usr/bin/python3"
6
+ MODEL="qwen"
7
+ LOG_DIR="/data/shared/Qwen/experiments/exp2a_correct_filter/logs/${MODEL}"
8
+ mkdir -p "$LOG_DIR"
9
+
10
+ SCALES=("vanilla" "80k" "400k" "800k" "2m")
11
+ GPUS=(0 1 2 3 4)
12
+
13
+ echo "========================================="
14
+ echo " Qwen: Launching ${#SCALES[@]} scales in parallel"
15
+ echo "========================================="
16
+
17
+ PIDS=()
18
+ for i in "${!SCALES[@]}"; do
19
+ scale="${SCALES[$i]}"
20
+ gpu="${GPUS[$i]}"
21
+ log="${LOG_DIR}/${scale}.log"
22
+
23
+ echo "[GPU $gpu] $scale -> $log"
24
+ CUDA_VISIBLE_DEVICES=$gpu $PYTHON $SCRIPT \
25
+ --model_type $MODEL \
26
+ --scales $scale \
27
+ --device cuda \
28
+ --no-auto-roborefer \
29
+ > "$log" 2>&1 &
30
+ PIDS+=($!)
31
+ done
32
+
33
+ echo ""
34
+ echo "Waiting for all ${#PIDS[@]} processes..."
35
+ echo "PIDs: ${PIDS[*]}"
36
+ echo ""
37
+
38
+ FAILED=0
39
+ for i in "${!PIDS[@]}"; do
40
+ pid="${PIDS[$i]}"
41
+ scale="${SCALES[$i]}"
42
+ if wait $pid; then
43
+ echo "[DONE] $scale (PID $pid) - SUCCESS"
44
+ else
45
+ echo "[FAIL] $scale (PID $pid) - EXIT CODE $?"
46
+ FAILED=$((FAILED + 1))
47
+ fi
48
+ done
49
+
50
+ echo ""
51
+ if [ $FAILED -gt 0 ]; then
52
+ echo "WARNING: $FAILED scale(s) failed. Check logs in $LOG_DIR"
53
+ fi
54
+
55
+ echo "========================================="
56
+ echo " Qwen: Running merge"
57
+ echo "========================================="
58
+ $PYTHON $SCRIPT --model_type $MODEL --merge 2>&1 | tee "${LOG_DIR}/merge.log"
59
+
60
+ echo ""
61
+ echo "ALL DONE: $MODEL"
62
+ echo "Results: /data/shared/Qwen/experiments/exp2a_correct_filter/results/${MODEL}/"
exp2a_modified/exp2a_modified_embedding_analysis.py ADDED
@@ -0,0 +1,1228 @@
1
+ """
2
+ Experiment 2-A (Modified): Image-conditioned Representation Analysis
3
+
4
+ Modification from original:
5
+ - Remove task format confound by unifying answer format
6
+ - All answers are pure spatial concepts: left, right, above, under, far, close
7
+ - Pairwise: "Is the {obj1} to the left or right of the {obj2}?" -> "left"
8
+ - Distance: "Compared to {ref}, is {target} far or close from you?" -> "far"
9
+ - 200 samples per category (up from 50)
10
+
11
+ Goal: Verify Hypothesis 4 - that above/far and under/close are mapped to similar
12
+ positions in embedding space, while left/right are well-separated.
13
+ """
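The answer-format unification described in the docstring can be sketched in isolation. This is a minimal standalone illustration, not the script's implementation below: the object names ("mug", "laptop") are hypothetical, whereas the real pipeline extracts them from each EmbSpatial-Bench question via regex.

```python
# Minimal sketch of the pairwise answer-format unification.
# Object names here are hypothetical placeholders.
def unify_pairwise(obj1: str, obj2: str, category: str) -> dict:
    """Rewrite a pairwise question so the answer is the bare spatial concept."""
    if category in ('left', 'right'):
        question = f"Is the {obj1} to the left or right of the {obj2}?"
    else:  # 'above' or 'under'
        question = f"Is the {obj1} above or under the {obj2}?"
    # The answer equals the category label itself, removing the MCQ format confound.
    return {'question': question, 'answer': category}

print(unify_pairwise("mug", "laptop", "left"))
# -> {'question': 'Is the mug to the left or right of the laptop?', 'answer': 'left'}
```

The distance categories (far/close) follow the same pattern with a randomly chosen reference object from the original options.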
14
+
15
+ import os
16
+ import sys
17
+ import json
18
+ import argparse
19
+ import base64
20
+ import logging
21
+ import random
22
+ import re
23
+ from io import BytesIO
24
+ from collections import defaultdict
25
+ from typing import Dict, List, Tuple, Optional, Any
26
+ from abc import ABC, abstractmethod
27
+
28
+ import torch
29
+ import numpy as np
30
+ import pandas as pd
31
+ from PIL import Image
32
+ from tqdm import tqdm
33
+ import matplotlib.pyplot as plt
34
+ import seaborn as sns
35
+ from sklearn.metrics.pairwise import cosine_similarity
36
+
37
+ # Setup logging
38
+ logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
39
+ logger = logging.getLogger(__name__)
40
+
41
+ # Category order for output
42
+ CATEGORY_ORDER = ['left', 'right', 'above', 'under', 'far', 'close']
43
+
44
+ # Pair definitions for trajectory analysis
45
+ TRAJECTORY_PAIRS = {
46
+ 'hypothesis': [
47
+ ('above', 'far', 'above-far', '#d62728'), # red
48
+ ('under', 'close', 'under-close', '#1f77b4'), # blue
49
+ ],
50
+ 'within_axis': [
51
+ ('left', 'right', 'left-right', '#2ca02c'), # green
52
+ ('above', 'under', 'above-under', '#ff7f0e'), # orange
53
+ ('far', 'close', 'far-close', '#9467bd'), # purple
54
+ ],
55
+ 'counter_hypothesis': [
56
+ ('above', 'close', 'above-close', '#e377c2'), # pink
57
+ ('under', 'far', 'under-far', '#17becf'), # cyan
58
+ ],
59
+ }
60
+
61
+ # Scale colors for cross-scale plots
62
+ SCALE_COLORS = {
63
+ 'vanilla': '#1f77b4',
64
+ '80k': '#ff7f0e',
65
+ '400k': '#2ca02c',
66
+ '800k': '#d62728',
67
+ '2m': '#9467bd',
68
+ 'roborefer': '#8c564b',
69
+ }
70
+
71
+
72
+ # ============================================================================
73
+ # Data Loading & Modification
74
+ # ============================================================================
75
+
76
+ # Regex patterns for extracting objects from pairwise questions
77
+ OBJECT_PATTERNS = [
78
+ re.compile(r'between\s+(.+?)\s+and\s+(.+?)\s+in', re.IGNORECASE),
79
+ re.compile(r'of\s+(.+?)\s+and\s+(.+?)\s+in', re.IGNORECASE),
80
+ re.compile(r'positions\s+of\s+(.+?)\s+and\s+(.+?)\s+interact', re.IGNORECASE),
81
+ re.compile(r'How\s+are\s+(.+?)\s+and\s+(.+?)\s+positioned', re.IGNORECASE),
82
+ re.compile(r'arrangement\s+of\s+(.+?)\s+and\s+(.+?)\s+in', re.IGNORECASE),
83
+ ]
84
+
85
+
86
+ def extract_objects(question: str) -> Tuple[str, str]:
87
+ """Extract two objects from a pairwise relation question."""
88
+ for pattern in OBJECT_PATTERNS:
89
+ m = pattern.search(question)
90
+ if m:
91
+ return m.group(1).strip(), m.group(2).strip()
92
+ raise ValueError(f"Could not extract objects from: {question}")
93
+
94
+
95
+ def modify_pairwise_sample(sample: dict) -> dict:
96
+ """Modify a pairwise relation sample (left/right/above/under)."""
97
+ obj1, obj2 = extract_objects(sample['question'])
98
+ category = sample['category']
99
+
100
+ if category in ['left', 'right']:
101
+ new_question = f"Is the {obj1} to the left or right of the {obj2}?"
102
+ else: # above, under
103
+ new_question = f"Is the {obj1} above or under the {obj2}?"
104
+
105
+ return {
106
+ 'index': sample['index'],
107
+ 'image_base64': sample['image_base64'],
108
+ 'question': new_question,
109
+ 'answer': category,
110
+ 'category': category,
111
+ }
112
+
113
+
114
+ def modify_distance_sample(sample: dict, rng: random.Random) -> dict:
115
+ """Modify a distance relation sample (far/close)."""
116
+ category = sample['category']
117
+ answer_key = sample['answer'] # e.g. "C"
118
+ options = sample['options'] # {'A': 'table', 'B': 'towel', ...}
119
+
120
+ target_object = options[answer_key]
121
+ candidates = [v for k, v in options.items() if k != answer_key]
122
+ reference_object = rng.choice(candidates)
123
+
124
+ new_question = f"Compared to {reference_object}, is {target_object} far or close from you?"
125
+
126
+ return {
127
+ 'index': sample['index'],
128
+ 'image_base64': sample['image_base64'],
129
+ 'question': new_question,
130
+ 'answer': category,
131
+ 'category': category,
132
+ }
133
+
134
+
135
+ def load_and_modify_data(
136
+ tsv_path: str,
137
+ samples_per_category: int = 200,
138
+ seed: int = 42
139
+ ) -> Dict[str, List[dict]]:
140
+ """
141
+ Load EmbSpatialBench data, modify questions to remove format confound.
142
+ """
143
+ rng = random.Random(seed)
144
+ np.random.seed(seed)
145
+
146
+ df = pd.read_csv(tsv_path, sep='\t')
147
+
148
+ # Group by category
149
+ raw_grouped = defaultdict(list)
150
+ for _, row in df.iterrows():
151
+ category = row['category']
152
+ sample = {
153
+ 'index': row['index'],
154
+ 'image_base64': row['image'],
155
+ 'question': row['question'],
156
+ 'answer': row['answer'],
157
+ 'category': category,
158
+ 'options': {
159
+ 'A': row['A'],
160
+ 'B': row['B'],
161
+ 'C': row['C'],
162
+ 'D': row['D']
163
+ }
164
+ }
165
+ raw_grouped[category].append(sample)
166
+
167
+ # Sample and modify
168
+ modified_data = defaultdict(list)
169
+ stats = {'total': 0, 'success': 0, 'failed': 0}
170
+
171
+ for category in CATEGORY_ORDER:
172
+ samples = raw_grouped[category]
173
+
174
+ # Sample up to samples_per_category
175
+ if len(samples) > samples_per_category:
176
+ indices = np.random.choice(len(samples), samples_per_category, replace=False)
177
+ samples = [samples[i] for i in indices]
178
+
179
+ for sample in samples:
180
+ stats['total'] += 1
181
+ try:
182
+ if category in ['left', 'right', 'above', 'under']:
183
+ modified = modify_pairwise_sample(sample)
184
+ else: # far, close
185
+ modified = modify_distance_sample(sample, rng)
186
+
187
+ # Validate
188
+ assert modified['answer'] == modified['category']
189
+ modified_data[category].append(modified)
190
+ stats['success'] += 1
191
+ except Exception as e:
192
+ stats['failed'] += 1
193
+ logger.warning(f" Failed to modify sample {sample['index']}: {e}")
194
+
195
+ logger.info(f"Data modification: {stats['success']}/{stats['total']} success, {stats['failed']} failed")
196
+ for cat in CATEGORY_ORDER:
197
+ if cat in modified_data:
198
+ logger.info(f" {cat}: {len(modified_data[cat])} samples")
199
+ # Show first example
200
+ ex = modified_data[cat][0]
201
+ logger.info(f" Example Q: {ex['question']}")
202
+ logger.info(f" Example A: {ex['answer']}")
203
+
204
+ return dict(modified_data)
205
+
206
+
207
+ def decode_base64_image(base64_str: str) -> Image.Image:
208
+ """Decode base64 string to PIL Image."""
209
+ image_data = base64.b64decode(base64_str)
210
+ return Image.open(BytesIO(image_data)).convert('RGB')
211
+
212
+
213
+ # ============================================================================
214
+ # Base Extractor
215
+ # ============================================================================
216
+
217
+ class BaseHiddenStateExtractor(ABC):
218
+ """Base class for extracting hidden states from VLMs."""
219
+
220
+ def __init__(self, model_path: str, device: str = 'cuda', target_layers: List[int] = None):
221
+ self.model_path = model_path
222
+ self.device = device
223
+ self.hidden_states = {}
224
+ self.hooks = []
225
+
226
+ self._load_model()
227
+
228
+ num_layers = self._get_num_layers()
229
+ if target_layers is None:
230
+ self.target_layers = list(range(num_layers))
231
+ logger.info(f"Model has {num_layers} layers. Extracting ALL layers (0..{num_layers-1})")
232
+ else:
233
+ self.target_layers = target_layers
234
+ logger.info(f"Model has {num_layers} layers. Target layers: {self.target_layers}")
235
+
236
+ self._register_hooks()
237
+
238
+ def _register_hooks(self):
239
+ """Register forward hooks on target layers."""
240
+ for layer_idx in self.target_layers:
241
+ module = self._get_layer_module(layer_idx)
242
+ if module is not None:
243
+ hook = module.register_forward_hook(self._make_hook(layer_idx))
244
+ self.hooks.append(hook)
245
+ logger.info(f" Registered hook on layer {layer_idx}")
246
+
247
+ def _make_hook(self, layer_idx: int):
248
+ """Create a hook function for a specific layer."""
249
+ def hook_fn(module, input, output):
250
+ if isinstance(output, tuple):
251
+ hidden = output[0]
252
+ else:
253
+ hidden = output
254
+
255
+ # Last token pooling
256
+ last_token = hidden[:, -1, :].detach().cpu().float()
257
+ self.hidden_states[layer_idx] = last_token.squeeze(0)
258
+
259
+ return hook_fn
260
+
261
+ @abstractmethod
262
+ def _load_model(self):
263
+ pass
264
+
265
+ @abstractmethod
266
+ def _get_num_layers(self) -> int:
267
+ pass
268
+
269
+ @abstractmethod
270
+ def _get_layer_module(self, layer_idx: int):
271
+ pass
272
+
273
+ @abstractmethod
274
+ def extract(self, image: Image.Image, question: str) -> Dict[int, torch.Tensor]:
275
+ pass
276
+
277
+ def cleanup(self):
278
+ """Remove hooks and free memory."""
279
+ for hook in self.hooks:
280
+ hook.remove()
281
+ self.hooks = []
282
+ if hasattr(self, 'model'):
283
+ del self.model
284
+ if hasattr(self, 'processor'):
285
+ del self.processor
286
+ torch.cuda.empty_cache()
287
+
288
+
289
# ============================================================================
# Molmo Extractor
# ============================================================================

class MolmoExtractor(BaseHiddenStateExtractor):
    """Hidden state extractor for Molmo models (native olmo format)."""

    def _load_model(self):
        config_path = os.path.join(self.model_path, "config.yaml")
        checkpoint_path = os.path.join(self.model_path, "model.pt")

        if os.path.exists(config_path) and os.path.exists(checkpoint_path):
            self._load_native_model()
            self.is_native = True
        else:
            self._load_hf_model()
            self.is_native = False

    def _load_native_model(self):
        from olmo.config import ModelConfig
        from olmo.model import Molmo as NativeMolmoModel
        from olmo.data.model_preprocessor import MultiModalPreprocessor
        from olmo.data.data_formatter import DataFormatter

        # NOTE: torch.load is patched process-wide (weights_only=False is needed
        # for the native checkpoint format) and is intentionally not restored.
        _original_load = torch.load
        def _unsafe_load_wrapper(*args, **kwargs):
            if 'weights_only' not in kwargs:
                kwargs['weights_only'] = False
            return _original_load(*args, **kwargs)
        torch.load = _unsafe_load_wrapper

        config_path = os.path.join(self.model_path, "config.yaml")
        checkpoint_path = os.path.join(self.model_path, "model.pt")

        cfg = ModelConfig.load(config_path, key="model", validate_paths=False)
        cfg.init_device = "cpu"

        self.model = NativeMolmoModel(cfg)
        state_dict = torch.load(checkpoint_path, map_location="cpu")
        self.model.load_state_dict(state_dict)
        self.model = self.model.to(self.device, dtype=torch.bfloat16).eval()

        self.tokenizer = cfg.get_tokenizer()
        v_cfg = cfg.vision_backbone
        h, w = cfg.llm_patches_per_crop()
        image_padding_mask = 2 if cfg.fix_image_padding else (1 if cfg.image_padding_embed else None)

        class SafeDataFormatter(DataFormatter):
            def get_system_prompt(self, style, for_inference, messages, rng=None):
                if style is None:
                    style = "User"
                return super().get_system_prompt(style, for_inference, messages, rng)

        self.formatter = SafeDataFormatter(
            prompt_templates=cfg.prompt_type,
            message_format=cfg.message_formatting,
            system_prompt=cfg.system_prompt_kind,
            always_start_with_space=cfg.always_start_with_space,
            default_inference_len=cfg.default_inference_len
        )

        self.preprocessor = MultiModalPreprocessor(
            tokenizer=self.tokenizer,
            normalize=str(v_cfg.image_model_type),
            crop_mode=cfg.crop_mode,
            max_crops=cfg.max_crops,
            overlap_margins=cfg.overlap_margins,
            resize=v_cfg.resize_mode,
            use_col_tokens=cfg.use_col_tokens,
            base_image_input_size=v_cfg.image_default_input_size,
            image_pooling_w=cfg.image_pooling_w,
            image_pooling_h=cfg.image_pooling_h,
            image_token_length_w=w,
            image_token_length_h=h,
            image_patch_size=v_cfg.image_patch_size,
            image_padding_mask=image_padding_mask,
            pad_value=cfg.pad_value,
            loss_token_weighting=cfg.multi_annotation_weighting,
        )

        logger.info(f"Loaded native Molmo model from {self.model_path}")

    def _load_hf_model(self):
        from transformers import AutoModelForCausalLM, AutoProcessor

        self.model = AutoModelForCausalLM.from_pretrained(
            self.model_path,
            torch_dtype=torch.bfloat16,
            trust_remote_code=True,
            device_map=self.device
        )
        self.model.eval()

        self.processor = AutoProcessor.from_pretrained(
            self.model_path,
            trust_remote_code=True
        )
        logger.info(f"Loaded HuggingFace Molmo model from {self.model_path}")

    def _get_num_layers(self) -> int:
        if self.is_native:
            return len(self.model.transformer.blocks)
        else:
            if hasattr(self.model, 'model') and hasattr(self.model.model, 'transformer'):
                return len(self.model.model.transformer.blocks)
            return 32

    def _get_layer_module(self, layer_idx: int):
        if self.is_native:
            return self.model.transformer.blocks[layer_idx]
        else:
            return self.model.model.transformer.blocks[layer_idx]

    def extract(self, image: Image.Image, question: str) -> Dict[int, torch.Tensor]:
        self.hidden_states = {}

        if self.is_native:
            example = {"messages": [question], "image": image}
            messages, _ = self.formatter(example, is_training=False, for_inference=True, rng=np.random)
            image_np = np.array(image)
            batch = self.preprocessor(image_np, messages, is_training=False, require_image_features=True)

            if 'input_ids' not in batch and 'input_tokens' in batch:
                batch['input_ids'] = batch['input_tokens']

            def to_tensor(x):
                if isinstance(x, np.ndarray):
                    return torch.from_numpy(x)
                return x

            input_ids = to_tensor(batch['input_ids']).unsqueeze(0).to(self.device)
            if input_ids.dtype not in [torch.long, torch.int64]:
                input_ids = input_ids.long()

            images_tensor = to_tensor(batch['images']).unsqueeze(0).to(self.device).to(dtype=torch.bfloat16)
            image_masks = to_tensor(batch['image_masks']).unsqueeze(0).to(self.device).to(dtype=torch.bfloat16)
            image_input_idx = to_tensor(batch['image_input_idx']).unsqueeze(0).to(self.device)

            with torch.inference_mode():
                with torch.autocast(device_type="cuda", enabled=True, dtype=torch.bfloat16):
                    _ = self.model(
                        input_ids=input_ids,
                        images=images_tensor,
                        image_masks=image_masks,
                        image_input_idx=image_input_idx,
                    )
        else:
            inputs = self.processor.process(images=[image], text=question)
            processed_inputs = {}
            for k, v in inputs.items():
                v = v.to(self.device).unsqueeze(0)
                if v.dtype == torch.float32:
                    v = v.to(dtype=torch.bfloat16)
                processed_inputs[k] = v

            with torch.no_grad():
                _ = self.model(**processed_inputs)

        return self.hidden_states.copy()


# ============================================================================
# NVILA Extractor
# ============================================================================

class NVILAExtractor(BaseHiddenStateExtractor):
    """Hidden state extractor for NVILA models."""

    def _load_model(self):
        original_sys_path = sys.path.copy()
        sys.path = [p for p in sys.path if 'RoboRefer' not in p]

        modules_to_remove = [key for key in list(sys.modules.keys()) if 'llava' in key.lower()]
        removed_modules = {}
        for mod in modules_to_remove:
            removed_modules[mod] = sys.modules.pop(mod)

        try:
            import llava
            from llava.media import Image as LLaVAImage
            from llava import conversation as clib
        except Exception as err:
            sys.path = original_sys_path
            for mod, module in removed_modules.items():
                sys.modules[mod] = module
            raise RuntimeError(f"Failed to import llava: {err}")

        sys.path = original_sys_path

        self.LLaVAImage = LLaVAImage
        self.clib = clib

        self.model = llava.load(self.model_path, model_base=None)

        self._find_llm_backbone()

        logger.info(f"Loaded NVILA model from {self.model_path}")

    def _find_llm_backbone(self):
        """Find the LLM backbone module for hook registration."""
        candidates = []

        if hasattr(self.model, 'llm'):
            if hasattr(self.model.llm, 'model') and hasattr(self.model.llm.model, 'layers'):
                candidates.append(('model.llm.model.layers', self.model.llm.model.layers))
            if hasattr(self.model.llm, 'layers'):
                candidates.append(('model.llm.layers', self.model.llm.layers))

        if hasattr(self.model, 'model'):
            if hasattr(self.model.model, 'model') and hasattr(self.model.model.model, 'layers'):
                candidates.append(('model.model.model.layers', self.model.model.model.layers))
            if hasattr(self.model.model, 'layers'):
                candidates.append(('model.model.layers', self.model.model.layers))

        for name, module in self.model.named_modules():
            if name.endswith('.layers') and hasattr(module, '__len__') and len(module) > 0:
                candidates.append((name, module))

        if candidates:
            path, layers = candidates[0]
            logger.info(f"Found LLM layers at: {path} (num_layers={len(layers)})")
            self.llm_backbone = layers
            self.layers_path = path
        else:
            logger.error("Could not find transformer layers in model!")
            for name, _ in list(self.model.named_modules())[:20]:
                logger.info(f" {name}")
            raise ValueError("Could not locate transformer layers in NVILA model")

    def _get_num_layers(self) -> int:
        if hasattr(self, 'llm_backbone') and hasattr(self.llm_backbone, '__len__'):
            return len(self.llm_backbone)
        return 24

    def _get_layer_module(self, layer_idx: int):
        if hasattr(self, 'llm_backbone') and hasattr(self.llm_backbone, '__getitem__'):
            module = self.llm_backbone[layer_idx]
            logger.info(f" Accessing layer {layer_idx}: {type(module).__name__}")
            return module
        logger.error(f"Cannot access layer {layer_idx} - llm_backbone not properly initialized")
        return None

    def extract(self, image: Image.Image, question: str) -> Dict[int, torch.Tensor]:
        self.hidden_states = {}

        import tempfile
        with tempfile.NamedTemporaryFile(suffix='.png', delete=False) as f:
            temp_path = f.name
            image.save(temp_path)

        try:
            prompt = [self.LLaVAImage(temp_path), question]

            from transformers import GenerationConfig
            gen_config = GenerationConfig(max_new_tokens=1, do_sample=False)
            _ = self.model.generate_content(prompt, generation_config=gen_config)
        finally:
            os.unlink(temp_path)

        return self.hidden_states.copy()


# ============================================================================
# RoboRefer Extractor (NVILA-based)
# ============================================================================

class RoboReferExtractor(NVILAExtractor):
    """Hidden state extractor for RoboRefer models (NVILA-based, different llava path)."""

    ROBOREFER_PATH = '/data/shared/Qwen/RoboRefer'

    def _load_model(self):
        original_sys_path = sys.path.copy()

        # Add RoboRefer path (opposite of NVILA which removes it)
        if self.ROBOREFER_PATH not in sys.path:
            sys.path.insert(0, self.ROBOREFER_PATH)

        # Clear any existing llava modules to avoid conflicts
        modules_to_remove = [key for key in list(sys.modules.keys()) if 'llava' in key.lower()]
        removed_modules = {}
        for mod in modules_to_remove:
            removed_modules[mod] = sys.modules.pop(mod)

        try:
            import llava
            from llava.media import Image as LLaVAImage
            from llava import conversation as clib
        except Exception as err:
            sys.path = original_sys_path
            for mod, module in removed_modules.items():
                sys.modules[mod] = module
            raise RuntimeError(f"Failed to import RoboRefer llava: {err}")

        sys.path = original_sys_path

        self.LLaVAImage = LLaVAImage
        self.clib = clib

        self.model = llava.load(self.model_path, model_base=None)

        self._find_llm_backbone()

        logger.info(f"Loaded RoboRefer model from {self.model_path}")


# ============================================================================
# Qwen2.5-VL Extractor
# ============================================================================

class Qwen25VLExtractor(BaseHiddenStateExtractor):
    """Hidden state extractor for Qwen2.5-VL models."""

    BASE_MODEL = "Qwen/Qwen2.5-VL-3B-Instruct"

    def _load_model(self):
        from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor

        try:
            self.model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
                self.model_path,
                torch_dtype=torch.bfloat16,
                device_map=self.device
            )
        except ImportError:
            logger.info("accelerate not available, loading model without device_map...")
            self.model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
                self.model_path,
                torch_dtype=torch.bfloat16,
            )
            self.model = self.model.to(self.device)

        self.model.eval()

        if self.model_path.startswith('/'):
            logger.info(f"Fine-tuned model detected, loading processor from base model: {self.BASE_MODEL}")
            self.processor = AutoProcessor.from_pretrained(self.BASE_MODEL)
        else:
            self.processor = AutoProcessor.from_pretrained(self.model_path)
        logger.info(f"Loaded Qwen2.5-VL model from {self.model_path}")

    def _get_num_layers(self) -> int:
        return len(self.model.model.layers)

    def _get_layer_module(self, layer_idx: int):
        return self.model.model.layers[layer_idx]

    def extract(self, image: Image.Image, question: str) -> Dict[int, torch.Tensor]:
        self.hidden_states = {}

        messages = [
            {
                "role": "user",
                "content": [
                    {"type": "image", "image": image},
                    {"type": "text", "text": question}
                ]
            }
        ]

        text = self.processor.apply_chat_template(
            messages, tokenize=False, add_generation_prompt=True
        )

        from qwen_vl_utils import process_vision_info
        image_inputs, video_inputs = process_vision_info(messages)

        inputs = self.processor(
            text=[text],
            images=image_inputs,
            videos=video_inputs,
            padding=True,
            return_tensors="pt"
        )
        inputs = inputs.to(self.device)

        with torch.no_grad():
            _ = self.model(**inputs)

        return self.hidden_states.copy()


# ============================================================================
# Factory Function
# ============================================================================

def get_extractor(model_type: str, model_path: str, scale: str = None, **kwargs) -> BaseHiddenStateExtractor:
    # RoboRefer uses NVILA architecture but needs different llava import path
    if model_type == 'nvila' and scale == 'roborefer':
        return RoboReferExtractor(model_path, **kwargs)

    extractors = {
        'molmo': MolmoExtractor,
        'nvila': NVILAExtractor,
        'qwen': Qwen25VLExtractor,
    }
    if model_type not in extractors:
        raise ValueError(f"Unknown model type: {model_type}. Available: {list(extractors.keys())}")
    return extractors[model_type](model_path, **kwargs)


# ============================================================================
# Analysis Functions
# ============================================================================

def extract_all_layer_representations(
    extractor: BaseHiddenStateExtractor,
    data: Dict[str, List[dict]],
) -> Dict[int, Dict[str, np.ndarray]]:
    """Extract average hidden state representations for ALL target layers at once.

    Returns:
        Dict mapping layer_idx -> {category -> avg_vector}
    """
    # category_states[layer_idx][category] = list of vectors
    category_states = defaultdict(lambda: defaultdict(list))

    for category in CATEGORY_ORDER:
        if category not in data:
            continue
        samples = data[category]
        logger.info(f"Processing category: {category}")
        success_count = 0
        for sample in tqdm(samples, desc=f" {category}"):
            try:
                image = decode_base64_image(sample['image_base64'])
                hidden_states = extractor.extract(image, sample['question'])

                for layer_idx in extractor.target_layers:
                    if layer_idx in hidden_states:
                        state = hidden_states[layer_idx].numpy().flatten()
                        if state.size > 0:
                            category_states[layer_idx][category].append(state)

                if any(l in hidden_states for l in extractor.target_layers):
                    success_count += 1
                else:
                    logger.warning(f" No target layers found. Available: {list(hidden_states.keys())}")
            except Exception as e:
                logger.warning(f" Error processing sample {sample['index']}: {e}")
                continue

        logger.info(f" {category}: Successfully extracted {success_count}/{len(samples)} samples")

    # Average per category per layer
    result = {}
    for layer_idx in extractor.target_layers:
        category_avg = {}
        for category, states in category_states[layer_idx].items():
            if states:
                category_avg[category] = np.mean(states, axis=0)
        if category_avg:
            result[layer_idx] = category_avg
            logger.info(f" Layer {layer_idx}: {len(category_avg)} categories collected")
        else:
            logger.error(f" Layer {layer_idx}: No states collected!")

    if not result:
        raise ValueError("No representations were extracted!")

    return result
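The function accumulates per-sample vectors into a two-level `defaultdict` and then collapses each `(layer, category)` bucket into a single mean vector. The core of that accumulate-then-average structure, on toy two-dimensional vectors:

```python
import numpy as np
from collections import defaultdict

# category_states[layer_idx][category] -> list of per-sample vectors,
# mirroring the accumulation structure used above
category_states = defaultdict(lambda: defaultdict(list))
category_states[0]['above'].append(np.array([1.0, 0.0]))
category_states[0]['above'].append(np.array([0.0, 1.0]))
category_states[0]['far'].append(np.array([2.0, 2.0]))

# Averaging step: one mean vector per (layer, category)
result = {
    layer: {cat: np.mean(vecs, axis=0) for cat, vecs in cats.items()}
    for layer, cats in category_states.items()
}

print(result[0]['above'])  # [0.5 0.5]
```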


def compute_similarity_matrix(
    representations: Dict[str, np.ndarray]
) -> pd.DataFrame:
    """Compute pairwise cosine similarity with fixed category order."""
    available = [c for c in CATEGORY_ORDER if c in representations]
    vectors = np.array([representations[cat] for cat in available])
    sim_matrix = cosine_similarity(vectors)
    return pd.DataFrame(sim_matrix, index=available, columns=available)
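`compute_similarity_matrix` delegates to scikit-learn's `cosine_similarity`; the same pairwise matrix can be sketched in plain NumPy by row-normalizing and taking the Gram matrix:

```python
import numpy as np

def cosine_sim_matrix(vectors):
    v = np.asarray(vectors, dtype=np.float64)
    # Normalize each row to unit length; v @ v.T then yields all pairwise cosines
    v = v / np.linalg.norm(v, axis=1, keepdims=True)
    return v @ v.T

# Orthogonal rows give 0, a 45-degree pair gives 1/sqrt(2), the diagonal is 1
sim = cosine_sim_matrix([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
```

(The real function wraps the result in a `pd.DataFrame` so rows and columns carry category labels in the fixed `CATEGORY_ORDER`.)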


def analyze_hypothesis(sim_df: pd.DataFrame, model_name: str) -> dict:
    """Analyze the similarity matrix to test Hypothesis 4."""
    results = {'model': model_name}

    pairs_to_check = {
        'above_far': ('above', 'far'),
        'under_close': ('under', 'close'),
        'left_right': ('left', 'right'),
    }

    for pair_name, (cat1, cat2) in pairs_to_check.items():
        if cat1 in sim_df.index and cat2 in sim_df.columns:
            sim = sim_df.loc[cat1, cat2]
            results[f'sim_{pair_name}'] = sim
            logger.info(f" {pair_name}: sim({cat1}, {cat2}) = {sim:.4f}")
        else:
            results[f'sim_{pair_name}'] = None

    # Explicit None checks: a legitimate similarity of 0.0 must not be skipped
    if results.get('sim_above_far') is not None and results.get('sim_left_right') is not None:
        results['diff_above_far_vs_left_right'] = results['sim_above_far'] - results['sim_left_right']
    if results.get('sim_under_close') is not None and results.get('sim_left_right') is not None:
        results['diff_under_close_vs_left_right'] = results['sim_under_close'] - results['sim_left_right']

    return results
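The pair-vs-baseline differences hinge on distinguishing a missing value (`None`) from a legitimate similarity of exactly 0.0: a bare truthiness check would wrongly skip the latter. A small illustration of the None-safe guard (the function name is ours, not part of the script):

```python
def diff_vs_baseline(sim_pair, sim_baseline):
    # `is not None` keeps a valid similarity of exactly 0.0 in play,
    # where `if sim_pair and sim_baseline:` would silently drop it
    if sim_pair is not None and sim_baseline is not None:
        return sim_pair - sim_baseline
    return None

print(diff_vs_baseline(0.0, 0.5))   # -0.5
print(diff_vs_baseline(None, 0.5))  # None
```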


# ============================================================================
# Visualization
# ============================================================================

def plot_similarity_heatmap(sim_df: pd.DataFrame, title: str, save_path: str):
    """Plot and save similarity heatmap with fixed category order."""
    plt.figure(figsize=(10, 8))

    available_order = [c for c in CATEGORY_ORDER if c in sim_df.index]
    sim_df_ordered = sim_df.loc[available_order, available_order]

    sns.heatmap(
        sim_df_ordered,
        annot=True,
        fmt='.4f',
        cmap='RdYlBu_r',
        center=0.5,
        vmin=0,
        vmax=1,
        square=True,
        linewidths=0.5,
        cbar_kws={'label': 'Cosine Similarity'}
    )

    plt.title(title, fontsize=14, fontweight='bold')
    plt.tight_layout()
    plt.savefig(save_path, dpi=300, bbox_inches='tight')
    plt.close()
    logger.info(f"Saved heatmap: {save_path}")


def plot_comparison(results_list: List[dict], save_path: str):
    """Plot comparison of similarity pairs across models."""
    pairs = ['sim_above_far', 'sim_under_close', 'sim_left_right']
    pair_labels = ['above-far', 'under-close', 'left-right']

    fig, ax = plt.subplots(figsize=(12, 6))

    x = np.arange(len(pairs))
    width = 0.8 / len(results_list)

    for i, result in enumerate(results_list):
        model = result['model']
        values = [result.get(p, 0) or 0 for p in pairs]
        offset = (i - len(results_list) / 2 + 0.5) * width
        bars = ax.bar(x + offset, values, width, label=model)

        for bar, val in zip(bars, values):
            if val:
                ax.annotate(
                    f'{val:.3f}',
                    xy=(bar.get_x() + bar.get_width() / 2, bar.get_height()),
                    xytext=(0, 3),
                    textcoords='offset points',
                    ha='center',
                    va='bottom',
                    fontsize=8
                )

    ax.set_ylabel('Cosine Similarity')
    ax.set_title('Spatial Concept Similarity Comparison (Modified Format)\n(Hypothesis 4: above-far & under-close should be > left-right for vanilla)')
    ax.set_xticks(x)
    ax.set_xticklabels(pair_labels)
    ax.legend(loc='upper right', fontsize=8)
    ax.set_ylim(0, 1)
    ax.axhline(y=0.5, color='gray', linestyle='--', alpha=0.5)

    plt.tight_layout()
    plt.savefig(save_path, dpi=300, bbox_inches='tight')
    plt.close()
    logger.info(f"Saved comparison plot: {save_path}")


def _extract_pair_trajectory(
    all_layer_sims: Dict[int, pd.DataFrame],
    cat1: str, cat2: str,
) -> Tuple[List[int], List[float]]:
    """Extract similarity values for a pair across all layers."""
    layers = sorted(all_layer_sims.keys())
    valid_layers = []
    values = []
    for l in layers:
        df = all_layer_sims[l]
        if cat1 in df.index and cat2 in df.columns:
            valid_layers.append(l)
            values.append(df.loc[cat1, cat2])
    return valid_layers, values


def get_representative_layers(all_layers: List[int], n: int = 5) -> List[int]:
    """Pick n representative layers (evenly spaced) for heatmap output."""
    if len(all_layers) <= n:
        return list(all_layers)
    indices = np.linspace(0, len(all_layers) - 1, n, dtype=int)
    return [all_layers[i] for i in indices]
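The even spacing comes from `np.linspace` over list indices: both endpoints are included, and fractional positions are truncated by the integer cast. For example, with 32 layers and `n=5`:

```python
import numpy as np

all_layers = list(range(32))
n = 5

# linspace(0, 31, 5) -> [0., 7.75, 15.5, 23.25, 31.]; dtype=int truncates
indices = np.linspace(0, len(all_layers) - 1, n, dtype=int)
picked = [all_layers[i] for i in indices]

print(picked)  # [0, 7, 15, 23, 31]
```

So the first and last layers are always kept, which matters when comparing early vs. late representations.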


def plot_similarity_trajectories(
    all_layer_sims: Dict[int, pd.DataFrame],
    title: str,
    save_path: str,
):
    """Plot similarity of key category pairs across all layers.

    Left panel: absolute cosine similarity per pair across layers.
    Right panel: difference from left-right baseline (positive = more similar than L-R).
    """
    fig, axes = plt.subplots(1, 2, figsize=(20, 7))

    # --- Left panel: absolute similarity ---
    ax = axes[0]
    for cat1, cat2, label, color in TRAJECTORY_PAIRS['hypothesis']:
        layers, vals = _extract_pair_trajectory(all_layer_sims, cat1, cat2)
        ax.plot(layers, vals, '-', color=color, label=label, linewidth=2.5, markersize=0)
    for cat1, cat2, label, color in TRAJECTORY_PAIRS['within_axis']:
        layers, vals = _extract_pair_trajectory(all_layer_sims, cat1, cat2)
        ax.plot(layers, vals, '--', color=color, label=label, linewidth=1.8, markersize=0)
    for cat1, cat2, label, color in TRAJECTORY_PAIRS['counter_hypothesis']:
        layers, vals = _extract_pair_trajectory(all_layer_sims, cat1, cat2)
        ax.plot(layers, vals, ':', color=color, label=label, linewidth=1.5, alpha=0.8)

    ax.set_xlabel('Layer Index', fontsize=12)
    ax.set_ylabel('Cosine Similarity', fontsize=12)
    ax.set_title(f'{title}\nPairwise Similarity Across Layers', fontsize=13)
    ax.legend(fontsize=9, loc='best')
    ax.grid(True, alpha=0.3)

    # --- Right panel: difference from left-right ---
    ax = axes[1]
    lr_layers, lr_vals = _extract_pair_trajectory(all_layer_sims, 'left', 'right')
    lr_dict = dict(zip(lr_layers, lr_vals))

    for cat1, cat2, label, color in TRAJECTORY_PAIRS['hypothesis']:
        layers, vals = _extract_pair_trajectory(all_layer_sims, cat1, cat2)
        diffs = [v - lr_dict.get(l, 0) for l, v in zip(layers, vals)]
        ax.plot(layers, diffs, '-', color=color, label=f'{label} - left-right',
                linewidth=2.5, markersize=0)

    for cat1, cat2, label, color in TRAJECTORY_PAIRS['counter_hypothesis']:
        layers, vals = _extract_pair_trajectory(all_layer_sims, cat1, cat2)
        diffs = [v - lr_dict.get(l, 0) for l, v in zip(layers, vals)]
        ax.plot(layers, diffs, ':', color=color, label=f'{label} - left-right',
                linewidth=1.5, alpha=0.8)

    # Also show above-under and far-close as references
    for cat1, cat2, label, color in TRAJECTORY_PAIRS['within_axis']:
        if label == 'left-right':
            continue
        layers, vals = _extract_pair_trajectory(all_layer_sims, cat1, cat2)
        diffs = [v - lr_dict.get(l, 0) for l, v in zip(layers, vals)]
        ax.plot(layers, diffs, '--', color=color, label=f'{label} - left-right',
                linewidth=1.5, alpha=0.7)

    ax.axhline(y=0, color='gray', linestyle='-', linewidth=1, alpha=0.5)
    ax.set_xlabel('Layer Index', fontsize=12)
    ax.set_ylabel('Similarity Difference (pair - left-right)', fontsize=12)
    ax.set_title(f'{title}\nRelative to Left-Right Baseline', fontsize=13)
    ax.legend(fontsize=8, loc='best')
    ax.grid(True, alpha=0.3)

    plt.tight_layout()
    plt.savefig(save_path, dpi=300, bbox_inches='tight')
    plt.close()
    logger.info(f"Saved trajectory plot: {save_path}")


def plot_cross_scale_trajectories(
    cross_scale_data: Dict[str, Dict[int, pd.DataFrame]],
    model_type: str,
    save_path: str,
):
    """Compare layer-wise trajectories across training scales.

    3 columns: above-far, under-close, left-right (control).
    Each subplot shows one line per scale.
    """
    pairs = [
        ('above', 'far', 'above-far (hypothesis)'),
        ('under', 'close', 'under-close (hypothesis)'),
        ('left', 'right', 'left-right (control)'),
    ]

    fig, axes = plt.subplots(1, len(pairs), figsize=(7 * len(pairs), 6))
    if len(pairs) == 1:
        axes = [axes]

    for idx, (cat1, cat2, label) in enumerate(pairs):
        ax = axes[idx]
        for scale in ['vanilla', '80k', '400k', '800k', '2m', 'roborefer']:
            if scale not in cross_scale_data:
                continue
            layer_sims = cross_scale_data[scale]
            layers, vals = _extract_pair_trajectory(layer_sims, cat1, cat2)
            color = SCALE_COLORS.get(scale, 'gray')
            ax.plot(layers, vals, '-', color=color, label=scale, linewidth=2, markersize=0)

        ax.set_xlabel('Layer Index', fontsize=12)
        ax.set_ylabel('Cosine Similarity', fontsize=12)
        ax.set_title(label, fontsize=13, fontweight='bold')
        ax.legend(fontsize=10)
        ax.grid(True, alpha=0.3)

    fig.suptitle(
        f'{model_type.upper()} - Similarity Trajectory Across Scales',
        fontsize=15, fontweight='bold', y=1.02
    )
    plt.tight_layout()
    plt.savefig(save_path, dpi=300, bbox_inches='tight')
    plt.close()
    logger.info(f"Saved cross-scale trajectory: {save_path}")


def plot_similarity_evolution_heatmap(
    cross_scale_data: Dict[str, Dict[int, pd.DataFrame]],
    model_type: str,
    save_path: str,
):
    """2D heatmap: x=layer, y=scale, color=similarity for each hypothesis pair.

    Gives a bird's-eye view of how both network depth and training data scale
    affect the similarity between hypothesis-relevant category pairs.
    """
    pairs = [
        ('above', 'far', 'above-far'),
        ('under', 'close', 'under-close'),
        ('left', 'right', 'left-right'),
        ('above', 'under', 'above-under'),
        ('far', 'close', 'far-close'),
    ]
    scale_order = ['vanilla', '80k', '400k', '800k', '2m', 'roborefer']
    available_scales = [s for s in scale_order if s in cross_scale_data]

    # Determine layer range from first available scale
    first_scale = available_scales[0]
    all_layers = sorted(cross_scale_data[first_scale].keys())

    fig, axes = plt.subplots(len(pairs), 1, figsize=(max(14, len(all_layers) * 0.5), 3 * len(pairs)))
    if len(pairs) == 1:
        axes = [axes]

    for idx, (cat1, cat2, label) in enumerate(pairs):
        ax = axes[idx]
        # Build matrix: rows=scales, cols=layers
        matrix = np.full((len(available_scales), len(all_layers)), np.nan)
        for si, scale in enumerate(available_scales):
            layer_sims = cross_scale_data[scale]
            for li, layer in enumerate(all_layers):
                if layer in layer_sims:
                    df = layer_sims[layer]
                    if cat1 in df.index and cat2 in df.columns:
                        matrix[si, li] = df.loc[cat1, cat2]

        im = ax.imshow(matrix, aspect='auto', cmap='RdYlBu_r', vmin=0.5, vmax=1.0)
        ax.set_yticks(range(len(available_scales)))
        ax.set_yticklabels(available_scales, fontsize=10)

        # X-axis: show every Nth layer label to avoid crowding
        step = max(1, len(all_layers) // 15)
        ax.set_xticks(range(0, len(all_layers), step))
        ax.set_xticklabels([str(all_layers[i]) for i in range(0, len(all_layers), step)], fontsize=8)

        ax.set_title(label, fontsize=12, fontweight='bold')
        ax.set_xlabel('Layer Index', fontsize=10)
        fig.colorbar(im, ax=ax, label='Cosine Similarity', shrink=0.8)

    fig.suptitle(
        f'{model_type.upper()} - Similarity Evolution (Layer x Scale)',
        fontsize=15, fontweight='bold', y=1.01
    )
    plt.tight_layout()
    plt.savefig(save_path, dpi=300, bbox_inches='tight')
    plt.close()
    logger.info(f"Saved evolution heatmap: {save_path}")


# ============================================================================
# Model Configurations
# ============================================================================

MODEL_CONFIGS = {
    'molmo': {
        'vanilla': 'allenai/Molmo-7B-O-0924',
        '80k': '/data/shared/Qwen/molmo/outputs/data_scale_exp_80k/unshared',
        '400k': '/data/shared/Qwen/molmo/outputs/data_scale_exp_400k/unshared',
        '800k': '/data/shared/Qwen/molmo/outputs/data_scale_exp_800k/unshared',
        '2m': '/data/shared/Qwen/molmo/outputs/data_scale_exp_2m/unshared',
    },
    'nvila': {
        'vanilla': '/data/shared/Qwen/mydisk/NVILA-Lite-2B',
        '80k': '/data/shared/Qwen/mydisk/output/DATA/NVILA-Lite-2B-DATA_SCALE_EXP_80K-20251108_180221',
        '400k': '/data/shared/Qwen/mydisk/output/DATA/NVILA-Lite-2B-DATA_SCALE_EXP_400K-20251108_180221',
        '800k': '/data/shared/Qwen/mydisk/output/DATA/NVILA-Lite-2B-DATA_SCALE_EXP_800K-20251108_180221',
        '2m': '/data/shared/Qwen/mydisk/output/DATA/NVILA-Lite-2B-DATA_SCALE_EXP_2M-20260205_003632',
        'roborefer': '/data/shared/Qwen/mydisk/RoboRefer_model',
    },
    'qwen': {
        'vanilla': 'Qwen/Qwen2.5-VL-3B-Instruct',
        '80k': '/data/shared/Qwen/mydisk/output/Qwen/Qwen2.5-VL-3B-Instruct-data_scale_exp_80k-20251114_120221',
        '400k': '/data/shared/Qwen/mydisk/output/Qwen/Qwen2.5-VL-3B-Instruct-data_scale_exp_400k-20251114_120221',
        '800k': '/data/shared/Qwen/mydisk/output/Qwen/Qwen2.5-VL-3B-Instruct-data_scale_exp_800k-20251114_120221',
        '2m': '/data/shared/Qwen/mydisk/output/Qwen/Qwen2.5-VL-3B-Instruct-data_scale_exp_2m-20260109_120517',
    },
}


# ============================================================================
# Main
# ============================================================================

def main():
    parser = argparse.ArgumentParser(description='Experiment 2-A (Modified): Embedding Space Analysis')
    parser.add_argument('--data_path', type=str,
                        default='/data/shared/Qwen/EmbSpatial-Bench/EmbSpatial-Bench.tsv')
    parser.add_argument('--model_type', type=str, required=True,
                        choices=['molmo', 'nvila', 'qwen'])
    parser.add_argument('--scales', type=str, nargs='+',
                        default=['vanilla', '80k', '400k', '800k', '2m'])
    parser.add_argument('--output_dir', type=str,
                        default='/data/shared/Qwen/experiments/exp2a_modified/results_all_layers')
    parser.add_argument('--samples_per_category', type=int, default=200)
    parser.add_argument('--device', type=str, default='cuda')
    parser.add_argument('--seed', type=int, default=42)

    args = parser.parse_args()

    # Auto-include roborefer for nvila if not already specified
    if args.model_type == 'nvila' and 'roborefer' not in args.scales:
        args.scales.append('roborefer')

    # Set random seed
    np.random.seed(args.seed)
    torch.manual_seed(args.seed)
    random.seed(args.seed)

    # Create output directory
    output_dir = os.path.join(args.output_dir, args.model_type)
    os.makedirs(output_dir, exist_ok=True)

    # Load and modify data
    logger.info("\n=== Loading & Modifying EmbSpatialBench Data ===")
    data = load_and_modify_data(args.data_path, args.samples_per_category, args.seed)

    results_list = []
    cross_scale_data = {}  # scale -> {layer_idx -> sim_df}
    model_configs = MODEL_CONFIGS[args.model_type]

    for scale in args.scales:
        if scale not in model_configs:
            logger.warning(f"Scale {scale} not available for {args.model_type}, skipping...")
            continue

        model_path = model_configs[scale]

        if not os.path.exists(model_path) and not model_path.startswith('Qwen/') and not model_path.startswith('allenai/'):
            logger.warning(f"Model path not found: {model_path}, skipping...")
            continue

        logger.info(f"\n=== Processing {args.model_type} - {scale} ===")
        logger.info(f"Model path: {model_path}")

        try:
            extractor = get_extractor(
                args.model_type,
                model_path,
                scale=scale,
                device=args.device,
            )

            num_layers = len(extractor.target_layers)

            # Extract representations for ALL layers in one pass
            all_layer_reps = extract_all_layer_representations(extractor, data)

            # Compute similarity matrices for all layers
1162
+ scale_sims = {}
1163
+ model_name = f"{args.model_type}_{scale}"
1164
+ for layer_idx in sorted(all_layer_reps.keys()):
1165
+ sim_df = compute_similarity_matrix(all_layer_reps[layer_idx])
1166
+ scale_sims[layer_idx] = sim_df
1167
+
1168
+ results = analyze_hypothesis(sim_df, model_name)
1169
+ results['layer_idx'] = layer_idx
1170
+ results_list.append(results)
1171
+
1172
+ # Save CSV for every layer
1173
+ sim_df.to_csv(os.path.join(output_dir, f'similarity_{scale}_L{layer_idx}.csv'))
1174
+
1175
+ cross_scale_data[scale] = scale_sims
1176
+ logger.info(f" Computed similarity matrices for {len(scale_sims)} layers")
1177
+
1178
+ # Save heatmaps for representative layers only (to avoid hundreds of files)
1179
+ rep_layers = get_representative_layers(sorted(scale_sims.keys()))
1180
+ logger.info(f" Saving heatmaps for representative layers: {rep_layers}")
1181
+ for layer_idx in rep_layers:
1182
+ sim_df = scale_sims[layer_idx]
1183
+ plot_similarity_heatmap(
1184
+ sim_df,
1185
+ f'{args.model_type.upper()} ({scale}) - Layer {layer_idx}/{num_layers-1}',
1186
+ os.path.join(output_dir, f'heatmap_{scale}_L{layer_idx}.png')
1187
+ )
1188
+
1189
+ # Per-scale trajectory plot
1190
+ plot_similarity_trajectories(
1191
+ scale_sims,
1192
+ f'{args.model_type.upper()} ({scale})',
1193
+ os.path.join(output_dir, f'trajectory_{scale}.png')
1194
+ )
1195
+
1196
+ extractor.cleanup()
1197
+
1198
+ except Exception as e:
1199
+ logger.error(f"Failed to process {args.model_type} - {scale}: {e}")
1200
+ import traceback
1201
+ traceback.print_exc()
1202
+ continue
1203
+
1204
+ # Cross-scale comparison plots
1205
+ if len(cross_scale_data) > 1:
1206
+ plot_cross_scale_trajectories(
1207
+ cross_scale_data,
1208
+ args.model_type,
1209
+ os.path.join(output_dir, 'trajectory_cross_scale.png')
1210
+ )
1211
+ plot_similarity_evolution_heatmap(
1212
+ cross_scale_data,
1213
+ args.model_type,
1214
+ os.path.join(output_dir, 'evolution_heatmap.png')
1215
+ )
1216
+
1217
+ # Save results summary
1218
+ if results_list:
1219
+ results_df = pd.DataFrame(results_list)
1220
+ results_df.to_csv(os.path.join(output_dir, 'results_summary.csv'), index=False)
1221
+
1222
+ logger.info("\n=== Analysis Complete ===")
1223
+ logger.info(f"Results saved to: {output_dir}")
1224
+ logger.info(f"Total: {len(results_list)} (layer, scale) combinations across {len(cross_scale_data)} scales")
1225
+
1226
+
1227
+ if __name__ == '__main__':
1228
+ main()
exp2a_modified/results/molmo/results_summary.csv ADDED
@@ -0,0 +1,26 @@
+ model,sim_above_far,sim_under_close,sim_left_right,diff_above_far_vs_left_right,diff_under_close_vs_left_right,layer_idx,layer_label
+ molmo_vanilla,0.93186307,0.9325508,0.9999072,-0.068044126,-0.06735641,6,early
+ molmo_vanilla,0.9252183,0.925783,0.9996471,-0.0744288,-0.0738641,13,early_mid
+ molmo_vanilla,0.8514263,0.85130924,0.9945253,-0.14309901,-0.14321607,19,middle
+ molmo_vanilla,0.7811126,0.7902819,0.9955554,-0.21444279,-0.20527351,26,late_mid
+ molmo_vanilla,0.82378054,0.8320327,0.9968723,-0.17309177,-0.16483963,31,late
+ molmo_80k,0.94482744,0.9447468,0.9999342,-0.05510676,-0.055187404,6,early
+ molmo_80k,0.9501332,0.9501227,0.99982655,-0.049693346,-0.049703836,13,early_mid
+ molmo_80k,0.8622559,0.86525977,0.9953824,-0.13312656,-0.13012266,19,middle
+ molmo_80k,0.7678993,0.780402,0.99710876,-0.22920948,-0.21670675,26,late_mid
+ molmo_80k,0.8963089,0.9020278,0.99889964,-0.10259074,-0.09687185,31,late
+ molmo_400k,0.94099295,0.9413343,0.9999467,-0.058953762,-0.058612406,6,early
+ molmo_400k,0.93268144,0.93169504,0.9983739,-0.065692484,-0.06667888,13,early_mid
+ molmo_400k,0.8004133,0.7915684,0.9835917,-0.18317837,-0.19202328,19,middle
+ molmo_400k,0.73278224,0.7314169,0.98859596,-0.25581372,-0.25717908,26,late_mid
+ molmo_400k,0.9089592,0.911077,0.99682474,-0.08786553,-0.08574772,31,late
+ molmo_800k,0.9501749,0.95063716,0.9999551,-0.04978019,-0.049317956,6,early
+ molmo_800k,0.92944044,0.92717594,0.9990981,-0.06965768,-0.07192218,13,early_mid
+ molmo_800k,0.7842552,0.7732489,0.9752356,-0.19098037,-0.20198667,19,middle
+ molmo_800k,0.7602978,0.7757774,0.9868044,-0.22650665,-0.21102703,26,late_mid
+ molmo_800k,0.9205744,0.9238774,0.99709034,-0.07651591,-0.07321292,31,late
+ molmo_2m,0.95355743,0.9536563,0.99995154,-0.04639411,-0.046295226,6,early
+ molmo_2m,0.9074487,0.9029928,0.999149,-0.091700315,-0.09615624,13,early_mid
+ molmo_2m,0.74899715,0.7498276,0.9528682,-0.20387107,-0.2030406,19,middle
+ molmo_2m,0.72931236,0.751271,0.9772682,-0.24795586,-0.22599721,26,late_mid
+ molmo_2m,0.9040614,0.9161786,0.99538875,-0.09132737,-0.07921016,31,late
exp2a_modified/results/molmo/similarity_2m_L19_middle.csv ADDED
@@ -0,0 +1,7 @@
+ ,left,right,above,under,far,close
+ left,0.9999994,0.9528682,0.8079404,0.7898549,0.75441873,0.74726176
+ right,0.9528682,1.0,0.79594153,0.79201853,0.74864805,0.74139905
+ above,0.8079404,0.79594153,1.0,0.86362475,0.74899715,0.72680587
+ under,0.7898549,0.79201853,0.86362475,0.9999998,0.73787785,0.7498276
+ far,0.75441873,0.74864805,0.74899715,0.73787785,1.0000002,0.99016166
+ close,0.74726176,0.74139905,0.72680587,0.7498276,0.99016166,0.99999976
exp2a_modified/results/molmo/similarity_2m_L26_late_mid.csv ADDED
@@ -0,0 +1,7 @@
+ ,left,right,above,under,far,close
+ left,1.0000001,0.9772682,0.82484055,0.81500334,0.7462688,0.73856544
+ right,0.9772682,1.0,0.81542575,0.81403667,0.73317695,0.7264032
+ above,0.82484055,0.81542575,1.0,0.915252,0.72931236,0.7135039
+ under,0.81500334,0.81403667,0.915252,1.0000001,0.74381506,0.751271
+ far,0.7462688,0.73317695,0.72931236,0.74381506,0.9999998,0.9895668
+ close,0.73856544,0.7264032,0.7135039,0.751271,0.9895668,1.0000005
exp2a_modified/results/molmo/similarity_2m_L31_late.csv ADDED
@@ -0,0 +1,7 @@
+ ,left,right,above,under,far,close
+ left,0.99999976,0.99538875,0.94726926,0.945658,0.9218228,0.9192073
+ right,0.99538875,0.9999996,0.94363815,0.9437132,0.9156522,0.91351354
+ above,0.94726926,0.94363815,1.0,0.9741205,0.9040614,0.8990477
+ under,0.945658,0.9437132,0.9741205,0.9999998,0.91565245,0.9161786
+ far,0.9218228,0.9156522,0.9040614,0.91565245,0.999999,0.9976242
+ close,0.9192073,0.91351354,0.8990477,0.9161786,0.9976242,1.0000002
exp2a_modified/results/molmo/similarity_2m_L6_early.csv ADDED
@@ -0,0 +1,7 @@
+ ,left,right,above,under,far,close
+ left,1.0000001,0.99995154,0.9897138,0.9891182,0.95365894,0.9537702
+ right,0.99995154,1.0000001,0.98972285,0.9891555,0.95393157,0.9540078
+ above,0.9897138,0.98972285,0.9999999,0.99978626,0.95355743,0.9535313
+ under,0.9891182,0.9891555,0.99978626,1.0000004,0.95373374,0.9536563
+ far,0.95365894,0.95393157,0.95355743,0.95373374,0.9999998,0.9998942
+ close,0.9537702,0.9540078,0.9535313,0.9536563,0.9998942,0.9999999
exp2a_modified/results/molmo/similarity_400k_L13_early_mid.csv ADDED
@@ -0,0 +1,7 @@
+ ,left,right,above,under,far,close
+ left,1.0000005,0.9983739,0.9708368,0.97001463,0.92518365,0.92525065
+ right,0.9983739,1.0000001,0.9714834,0.9709337,0.9258892,0.9259387
+ above,0.9708368,0.9714834,1.0000001,0.9966369,0.93268144,0.93088496
+ under,0.97001463,0.9709337,0.9966369,1.0000001,0.931912,0.93169504
+ far,0.92518365,0.9258892,0.93268144,0.931912,1.0,0.9991321
+ close,0.92525065,0.9259387,0.93088496,0.93169504,0.9991321,1.0000002
exp2a_modified/results/molmo/similarity_400k_L19_middle.csv ADDED
@@ -0,0 +1,7 @@
+ ,left,right,above,under,far,close
+ left,1.0000001,0.9835917,0.89433604,0.8799748,0.8249269,0.82165074
+ right,0.9835917,1.0,0.89446956,0.8852003,0.82732373,0.82341003
+ above,0.89433604,0.89446956,1.0,0.9350607,0.8004133,0.78341514
+ under,0.8799748,0.8852003,0.9350607,1.0000004,0.7830846,0.7915684
+ far,0.8249269,0.82732373,0.8004133,0.7830846,1.0000001,0.9916222
+ close,0.82165074,0.82341003,0.78341514,0.7915684,0.9916222,0.9999999
exp2a_modified/results/molmo/similarity_400k_L26_late_mid.csv ADDED
@@ -0,0 +1,7 @@
+ ,left,right,above,under,far,close
+ left,1.0000002,0.98859596,0.87158585,0.858461,0.78897005,0.7835145
+ right,0.98859596,0.99999994,0.8694842,0.8613639,0.78442615,0.779482
+ above,0.87158585,0.8694842,1.0000007,0.9409423,0.73278224,0.7150828
+ under,0.858461,0.8613639,0.9409423,0.9999998,0.7253824,0.7314169
+ far,0.78897005,0.78442615,0.73278224,0.7253824,0.9999997,0.9895003
+ close,0.7835145,0.779482,0.7150828,0.7314169,0.9895003,0.9999997
exp2a_modified/results/molmo/similarity_400k_L31_late.csv ADDED
@@ -0,0 +1,7 @@
+ ,left,right,above,under,far,close
+ left,1.0000001,0.99682474,0.95443934,0.95085055,0.92124707,0.9183261
+ right,0.99682474,1.0000005,0.9529462,0.95104045,0.91831386,0.91579354
+ above,0.95443934,0.9529462,0.99999976,0.9797501,0.9089592,0.90330064
+ under,0.95085055,0.95104045,0.9797501,1.0000005,0.910488,0.911077
+ far,0.92124707,0.91831386,0.9089592,0.910488,1.0000002,0.99741966
+ close,0.9183261,0.91579354,0.90330064,0.911077,0.99741966,1.0000001
exp2a_modified/results/molmo/similarity_400k_L6_early.csv ADDED
@@ -0,0 +1,7 @@
+ ,left,right,above,under,far,close
+ left,0.9999999,0.9999467,0.98677695,0.98590326,0.9344799,0.93451536
+ right,0.9999467,0.9999999,0.9866656,0.98579997,0.9344424,0.9344614
+ above,0.98677695,0.9866656,1.0000002,0.9997301,0.94099295,0.94090044
+ under,0.98590326,0.98579997,0.9997301,1.0,0.94144833,0.9413343
+ far,0.9344799,0.9344424,0.94099295,0.94144833,0.9999999,0.9999009
+ close,0.93451536,0.9344614,0.94090044,0.9413343,0.9999009,1.0000001
exp2a_modified/results/molmo/similarity_800k_L13_early_mid.csv ADDED
@@ -0,0 +1,7 @@
+ ,left,right,above,under,far,close
+ left,1.0000002,0.9990981,0.970269,0.96832395,0.9112702,0.91077095
+ right,0.9990981,1.0000005,0.97084796,0.96887577,0.9112278,0.9107026
+ above,0.970269,0.97084796,1.0,0.9983258,0.92944044,0.9281176
+ under,0.96832395,0.96887577,0.9983258,1.0000006,0.9282915,0.92717594
+ far,0.9112702,0.9112278,0.92944044,0.9282915,0.9999999,0.9996043
+ close,0.91077095,0.9107026,0.9281176,0.92717594,0.9996043,0.99999946
exp2a_modified/results/molmo/similarity_800k_L26_late_mid.csv ADDED
@@ -0,0 +1,7 @@
+ ,left,right,above,under,far,close
+ left,0.9999996,0.9868044,0.8516903,0.8428743,0.7809909,0.77917296
+ right,0.9868044,0.9999997,0.84324884,0.84005576,0.7758011,0.7743711
+ above,0.8516903,0.84324884,1.0000004,0.94099367,0.7602978,0.74670935
+ under,0.8428743,0.84005576,0.94099367,0.9999995,0.76920235,0.7757774
+ far,0.7809909,0.7758011,0.7602978,0.76920235,0.9999997,0.9897728
+ close,0.77917296,0.7743711,0.74670935,0.7757774,0.9897728,0.9999995
exp2a_modified/results/molmo/similarity_800k_L31_late.csv ADDED
@@ -0,0 +1,7 @@
+ ,left,right,above,under,far,close
+ left,0.99999946,0.99709034,0.95449007,0.95249826,0.9339559,0.9329122
+ right,0.99709034,0.9999998,0.9512191,0.95042264,0.9298728,0.9291853
+ above,0.95449007,0.9512191,1.0000004,0.9820097,0.9205744,0.9159472
+ under,0.95249826,0.95042264,0.9820097,1.0,0.9233836,0.9238774
+ far,0.9339559,0.9298728,0.9205744,0.9233836,1.0,0.9976558
+ close,0.9329122,0.9291853,0.9159472,0.9238774,0.9976558,0.99999976
exp2a_modified/results/molmo/similarity_800k_L6_early.csv ADDED
@@ -0,0 +1,7 @@
+ ,left,right,above,under,far,close
+ left,0.9999995,0.9999551,0.98933417,0.9887577,0.9477357,0.9479904
+ right,0.9999551,0.9999999,0.9892925,0.98874366,0.9477358,0.94796
+ above,0.98933417,0.9892925,0.9999998,0.999767,0.9501749,0.9503241
+ under,0.9887577,0.98874366,0.999767,0.99999964,0.95052344,0.95063716
+ far,0.9477357,0.9477358,0.9501749,0.95052344,0.9999996,0.9999156
+ close,0.9479904,0.94796,0.9503241,0.95063716,0.9999156,1.0000001
exp2a_modified/results/molmo/similarity_80k_L13_early_mid.csv ADDED
@@ -0,0 +1,7 @@
+ ,left,right,above,under,far,close
+ left,1.0000001,0.99982655,0.98557615,0.98462695,0.94595236,0.9460132
+ right,0.99982655,0.99999964,0.9854151,0.984502,0.94524086,0.9452381
+ above,0.98557615,0.9854151,0.99999964,0.9995325,0.9501332,0.9496666
+ under,0.98462695,0.984502,0.9995325,1.0,0.950608,0.9501227
+ far,0.94595236,0.94524086,0.9501332,0.950608,1.0000001,0.99974734
+ close,0.9460132,0.9452381,0.9496666,0.9501227,0.99974734,1.0000001
exp2a_modified/results/molmo/similarity_80k_L19_middle.csv ADDED
@@ -0,0 +1,7 @@
+ ,left,right,above,under,far,close
+ left,0.9999999,0.9953824,0.951213,0.9497772,0.87746465,0.8757486
+ right,0.9953824,1.0000002,0.9533331,0.95202667,0.8783575,0.87553334
+ above,0.951213,0.9533331,0.9999997,0.9892175,0.8622559,0.8543509
+ under,0.9497772,0.95202667,0.9892175,1.0000002,0.86614037,0.86525977
+ far,0.87746465,0.8783575,0.8622559,0.86614037,1.0000001,0.9966103
+ close,0.8757486,0.87553334,0.8543509,0.86525977,0.9966103,1.0000001
exp2a_modified/results/molmo/similarity_80k_L26_late_mid.csv ADDED
@@ -0,0 +1,7 @@
+ ,left,right,above,under,far,close
+ left,1.0,0.99710876,0.94322985,0.94265085,0.8083289,0.80842566
+ right,0.99710876,1.0,0.9450414,0.94532174,0.80541736,0.80532265
+ above,0.94322985,0.9450414,1.0000002,0.98973715,0.7678993,0.7628029
+ under,0.94265085,0.94532174,0.98973715,1.0000006,0.7791806,0.780402
+ far,0.8083289,0.80541736,0.7678993,0.7791806,1.0000001,0.9953803
+ close,0.80842566,0.80532265,0.7628029,0.780402,0.9953803,0.9999997
exp2a_modified/results/molmo/similarity_80k_L31_late.csv ADDED
@@ -0,0 +1,7 @@
+ ,left,right,above,under,far,close
+ left,0.99999994,0.99889964,0.9706672,0.97065854,0.91552895,0.9142971
+ right,0.99889964,1.0000006,0.97100496,0.97146505,0.9128623,0.91171795
+ above,0.9706672,0.97100496,1.0000001,0.99551195,0.8963089,0.89303714
+ under,0.97065854,0.97146505,0.99551195,1.0,0.9027907,0.9020278
+ far,0.91552895,0.9128623,0.8963089,0.9027907,1.0,0.99814963
+ close,0.9142971,0.91171795,0.89303714,0.9020278,0.99814963,1.0
exp2a_modified/results/molmo/similarity_80k_L6_early.csv ADDED
@@ -0,0 +1,7 @@
+ ,left,right,above,under,far,close
+ left,1.0000005,0.9999342,0.9846145,0.9833463,0.9410094,0.9415399
+ right,0.9999342,1.0000002,0.9844082,0.9831588,0.9409639,0.9414707
+ above,0.9846145,0.9844082,0.9999996,0.99965036,0.94482744,0.9451473
+ under,0.9833463,0.9831588,0.99965036,1.0000004,0.94445574,0.9447468
+ far,0.9410094,0.9409639,0.94482744,0.94445574,1.0000002,0.9998886
+ close,0.9415399,0.9414707,0.9451473,0.9447468,0.9998886,1.0000001
exp2a_modified/results/molmo/similarity_vanilla_L13_early_mid.csv ADDED
@@ -0,0 +1,7 @@
+ ,left,right,above,under,far,close
+ left,0.99999976,0.9996471,0.98403525,0.98292845,0.91606426,0.9160556
+ right,0.9996471,0.99999994,0.98429143,0.983286,0.9153532,0.9152582
+ above,0.98403525,0.98429143,1.0000002,0.9989633,0.9252183,0.9246333
+ under,0.98292845,0.983286,0.9989633,1.0,0.9264116,0.925783
+ far,0.91606426,0.9153532,0.9252183,0.9264116,0.9999999,0.99945354
+ close,0.9160556,0.9152582,0.9246333,0.925783,0.99945354,0.99999976
exp2a_modified/results/molmo/similarity_vanilla_L19_middle.csv ADDED
@@ -0,0 +1,7 @@
+ ,left,right,above,under,far,close
+ left,0.99999976,0.9945253,0.96206295,0.9596552,0.8591312,0.85741884
+ right,0.9945253,0.9999999,0.9645537,0.96271646,0.85745674,0.8543235
+ above,0.96206295,0.9645537,1.0,0.9921303,0.8514263,0.8453121
+ under,0.9596552,0.96271646,0.9921303,1.0000005,0.8540211,0.85130924
+ far,0.8591312,0.85745674,0.8514263,0.8540211,0.99999976,0.9961321
+ close,0.85741884,0.8543235,0.8453121,0.85130924,0.9961321,0.99999976
exp2a_modified/results/molmo/similarity_vanilla_L31_late.csv ADDED
@@ -0,0 +1,7 @@
+ ,left,right,above,under,far,close
+ left,1.0000004,0.9968723,0.96832633,0.96827185,0.8455303,0.84230846
+ right,0.9968723,0.99999964,0.97106063,0.9713094,0.8435764,0.83977795
+ above,0.96832633,0.97106063,0.9999999,0.9944878,0.82378054,0.8183431
+ under,0.96827185,0.9713094,0.9944878,1.0000004,0.8355485,0.8320327
+ far,0.8455303,0.8435764,0.82378054,0.8355485,1.0000001,0.9970446
+ close,0.84230846,0.83977795,0.8183431,0.8320327,0.9970446,0.99999976
exp2a_modified/results/molmo/similarity_vanilla_L6_early.csv ADDED
@@ -0,0 +1,7 @@
+ ,left,right,above,under,far,close
+ left,1.0000005,0.9999072,0.9843363,0.98275346,0.9225271,0.92304546
+ right,0.9999072,0.99999976,0.98413754,0.9825534,0.9222437,0.9227377
+ above,0.9843363,0.98413754,0.99999976,0.99941427,0.93186307,0.9320967
+ under,0.98275346,0.9825534,0.99941427,1.0000001,0.932338,0.9325508
+ far,0.9225271,0.9222437,0.93186307,0.932338,1.0000005,0.9998285
+ close,0.92304546,0.9227377,0.9320967,0.9325508,0.9998285,1.0000001
exp2a_modified/results/nvila/similarity_2m_L11_early_mid.csv ADDED
@@ -0,0 +1,7 @@
+ ,left,right,above,under,far,close
+ left,1.0000004,0.99997437,0.99205077,0.9921235,0.9665318,0.96630245
+ right,0.99997437,1.0000001,0.9920338,0.99214566,0.966442,0.9661902
+ above,0.99205077,0.9920338,1.0000002,0.99985087,0.97557366,0.97540605
+ under,0.9921235,0.99214566,0.99985087,0.99999964,0.97505516,0.9748245
+ far,0.9665318,0.966442,0.97557366,0.97505516,0.9999999,0.99989897
+ close,0.96630245,0.9661902,0.97540605,0.9748245,0.99989897,1.0000004
exp2a_modified/results/nvila/similarity_2m_L6_early.csv ADDED
@@ -0,0 +1,7 @@
+ ,left,right,above,under,far,close
+ left,1.0,0.9999543,0.99784696,0.9978435,0.98729,0.98708475
+ right,0.9999543,0.99999934,0.9977713,0.9978466,0.9870302,0.98678756
+ above,0.99784696,0.9977713,1.0000005,0.99984324,0.9882744,0.9881075
+ under,0.9978435,0.9978466,0.99984324,0.9999999,0.987956,0.9877119
+ far,0.98729,0.9870302,0.9882744,0.987956,0.9999998,0.99992377
+ close,0.98708475,0.98678756,0.9881075,0.9877119,0.99992377,0.99999994
exp2a_modified/results/nvila/similarity_400k_L22_late_mid.csv ADDED
@@ -0,0 +1,7 @@
+ ,left,right,above,under,far,close
+ left,1.0000001,0.98938745,0.94574475,0.9383662,0.84551483,0.8435887
+ right,0.98938745,1.0000005,0.944611,0.942079,0.8478445,0.84517944
+ above,0.94574475,0.944611,0.9999996,0.98561645,0.8716479,0.86627376
+ under,0.9383662,0.942079,0.98561645,0.99999964,0.8625552,0.8639274
+ far,0.84551483,0.8478445,0.8716479,0.8625552,0.9999995,0.9961122
+ close,0.8435887,0.84517944,0.86627376,0.8639274,0.9961122,1.0
exp2a_modified/results/nvila/similarity_800k_L27_late.csv ADDED
@@ -0,0 +1,7 @@
+ ,left,right,above,under,far,close
+ left,0.9999998,0.9998052,0.99807984,0.9981629,0.9949574,0.9949126
+ right,0.9998052,0.99999976,0.99777746,0.9980407,0.9950078,0.9949836
+ above,0.99807984,0.99777746,0.9999996,0.9994222,0.9953831,0.995184
+ under,0.9981629,0.9980407,0.9994222,1.0000001,0.99535114,0.9954171
+ far,0.9949574,0.9950078,0.9953831,0.99535114,1.0000008,0.9998271
+ close,0.9949126,0.9949836,0.995184,0.9954171,0.9998271,0.99999976
exp2a_modified/results/nvila/similarity_80k_L11_early_mid.csv ADDED
@@ -0,0 +1,7 @@
+ ,left,right,above,under,far,close
+ left,0.99999964,0.99997884,0.98755705,0.9873893,0.95973134,0.9595373
+ right,0.99997884,1.0,0.9876951,0.9875475,0.959848,0.9596424
+ above,0.98755705,0.9876951,1.0,0.99975145,0.971831,0.97180074
+ under,0.9873893,0.9875475,0.99975145,1.0,0.972558,0.972438
+ far,0.95973134,0.959848,0.971831,0.972558,0.9999996,0.9999212
+ close,0.9595373,0.9596424,0.97180074,0.972438,0.9999212,1.0000001
exp2a_modified/results/nvila/similarity_80k_L17_middle.csv ADDED
@@ -0,0 +1,7 @@
+ ,left,right,above,under,far,close
+ left,1.0,0.9781369,0.9310158,0.9325685,0.8561556,0.8558693
+ right,0.9781369,1.0000004,0.9284675,0.94139105,0.86132264,0.86142904
+ above,0.9310158,0.9284675,0.9999999,0.98366034,0.8978678,0.8936629
+ under,0.9325685,0.94139105,0.98366034,0.9999999,0.8987944,0.8984088
+ far,0.8561556,0.86132264,0.8978678,0.8987944,1.0,0.9990812
+ close,0.8558693,0.86142904,0.8936629,0.8984088,0.9990812,0.99999976
exp2a_modified/results/nvila/similarity_80k_L22_late_mid.csv ADDED
@@ -0,0 +1,7 @@
+ ,left,right,above,under,far,close
+ left,1.0000002,0.98724145,0.92874,0.9283841,0.850011,0.8500076
+ right,0.98724145,0.9999997,0.9257482,0.9328269,0.84493864,0.8449371
+ above,0.92874,0.9257482,0.9999999,0.9870798,0.8770188,0.8740602
+ under,0.9283841,0.9328269,0.9870798,1.0,0.8692532,0.8702637
+ far,0.850011,0.84493864,0.8770188,0.8692532,1.0,0.99855036
+ close,0.8500076,0.8449371,0.8740602,0.8702637,0.99855036,1.0000004
exp2a_modified/results/nvila/similarity_80k_L27_late.csv ADDED
@@ -0,0 +1,7 @@
+ ,left,right,above,under,far,close
+ left,1.0000004,0.99955857,0.9954845,0.9956236,0.992975,0.9930063
+ right,0.99955857,0.99999994,0.9953584,0.99582684,0.9927686,0.9928009
+ above,0.9954845,0.9953584,1.0000001,0.99934226,0.9929427,0.99282837
+ under,0.9956236,0.99582684,0.99934226,1.0000004,0.99276465,0.99287665
+ far,0.992975,0.9927686,0.9929427,0.99276465,1.0,0.9998536
+ close,0.9930063,0.9928009,0.99282837,0.99287665,0.9998536,1.0
exp2a_modified/results/nvila/similarity_vanilla_L22_late_mid.csv ADDED
@@ -0,0 +1,7 @@
+ ,left,right,above,under,far,close
+ left,0.99999994,0.9937971,0.91717666,0.9179457,0.8878075,0.88723373
+ right,0.9937971,1.0000001,0.9206685,0.92282534,0.8912285,0.8903196
+ above,0.91717666,0.9206685,0.9999998,0.9956542,0.9163888,0.91370475
+ under,0.9179457,0.92282534,0.9956542,0.9999995,0.9187532,0.9182395
+ far,0.8878075,0.8912285,0.9163888,0.9187532,1.0000004,0.9987387
+ close,0.88723373,0.8903196,0.91370475,0.9182395,0.9987387,1.0000002
exp2a_modified/results/nvila/similarity_vanilla_L6_early.csv ADDED
@@ -0,0 +1,7 @@
+ ,left,right,above,under,far,close
+ left,1.0000004,0.9997542,0.98725057,0.9877901,0.96586967,0.9646027
+ right,0.9997542,0.99999994,0.98683053,0.9876104,0.96541184,0.9639078
+ above,0.98725057,0.98683053,1.0000002,0.9994803,0.9768777,0.9762193
+ under,0.9877901,0.9876104,0.9994803,1.0,0.97659093,0.9755331
+ far,0.96586967,0.96541184,0.9768777,0.97659093,1.0000004,0.99978113
+ close,0.9646027,0.9639078,0.9762193,0.9755331,0.99978113,1.0000002
exp2a_modified/results/qwen/results_summary.csv ADDED
@@ -0,0 +1,26 @@
+ model,sim_above_far,sim_under_close,sim_left_right,diff_above_far_vs_left_right,diff_under_close_vs_left_right,layer_idx,layer_label
+ qwen_vanilla,0.9878544,0.9878074,0.9999392,-0.012084782,-0.01213181,7,early
+ qwen_vanilla,0.98418283,0.98290306,0.9998846,-0.01570177,-0.016981542,14,early_mid
+ qwen_vanilla,0.9776592,0.9756624,0.99965596,-0.021996737,-0.023993552,22,middle
+ qwen_vanilla,0.95032614,0.94788146,0.99512017,-0.044794023,-0.047238708,29,late_mid
+ qwen_vanilla,0.9415084,0.93939054,0.99848515,-0.056976736,-0.059094608,35,late
+ qwen_80k,0.9885977,0.98850304,0.99993646,-0.01133877,-0.011433423,7,early
+ qwen_80k,0.9823469,0.98100173,0.99989814,-0.017551243,-0.0188964,14,early_mid
+ qwen_80k,0.96985906,0.96774256,0.99973243,-0.029873371,-0.031989872,22,middle
+ qwen_80k,0.94964135,0.94838035,0.99680495,-0.047163606,-0.0484246,29,late_mid
+ qwen_80k,0.91188186,0.91212624,0.9987229,-0.08684105,-0.08659667,35,late
+ qwen_400k,0.9894593,0.9892013,0.99994236,-0.010483086,-0.010741055,7,early
+ qwen_400k,0.9844377,0.98320484,0.99993646,-0.015498757,-0.01673162,14,early_mid
+ qwen_400k,0.9699773,0.9682413,0.9997704,-0.029793084,-0.03152913,22,middle
+ qwen_400k,0.9580884,0.9558412,0.9983553,-0.04026693,-0.042514145,29,late_mid
+ qwen_400k,0.9148766,0.9173591,0.99830496,-0.08342838,-0.08094585,35,late
+ qwen_800k,0.9899683,0.9896326,0.99994457,-0.009976268,-0.010311961,7,early
+ qwen_800k,0.9868173,0.98572755,0.99994314,-0.013125837,-0.014215589,14,early_mid
+ qwen_800k,0.9739447,0.9729233,0.9997934,-0.025848687,-0.026870131,22,middle
+ qwen_800k,0.95486164,0.95309997,0.9981552,-0.043293536,-0.04505521,29,late_mid
+ qwen_800k,0.9358968,0.93043613,0.99775326,-0.06185645,-0.06731713,35,late
+ qwen_2m,0.9908798,0.9905167,0.9999402,-0.009060442,-0.009423494,7,early
+ qwen_2m,0.989565,0.98875475,0.9999511,-0.010386109,-0.011196375,14,early_mid
+ qwen_2m,0.9692019,0.9686675,0.9997675,-0.03056556,-0.031099975,22,middle
+ qwen_2m,0.93922085,0.93831193,0.9968462,-0.057625353,-0.058534265,29,late_mid
+ qwen_2m,0.9208069,0.9072825,0.9965475,-0.075740635,-0.08926505,35,late
exp2a_modified/results/qwen/similarity_2m_L14_early_mid.csv ADDED
@@ -0,0 +1,7 @@
+ ,left,right,above,under,far,close
+ left,0.9999998,0.9999511,0.99657696,0.996454,0.98376906,0.9835031
+ right,0.9999511,1.0,0.99654865,0.99649066,0.98354805,0.98321724
+ above,0.99657696,0.99654865,1.0000001,0.9999223,0.989565,0.9892365
+ under,0.996454,0.99649066,0.9999223,1.0000001,0.98912716,0.98875475
+ far,0.98376906,0.98354805,0.989565,0.98912716,1.0000005,0.9999435
+ close,0.9835031,0.98321724,0.9892365,0.98875475,0.9999435,1.0
exp2a_modified/results/qwen/similarity_2m_L22_middle.csv ADDED
@@ -0,0 +1,7 @@
+ ,left,right,above,under,far,close
+ left,0.99999976,0.9997675,0.98960626,0.9895336,0.9607725,0.9608675
+ right,0.9997675,1.0,0.9901829,0.99017763,0.96098214,0.96100146
+ above,0.98960626,0.9901829,0.9999995,0.99960077,0.9692019,0.9689368
+ under,0.9895336,0.99017763,0.99960077,0.9999995,0.96889305,0.9686675
+ far,0.9607725,0.96098214,0.9692019,0.96889305,0.9999999,0.999784
+ close,0.9608675,0.96100146,0.9689368,0.9686675,0.999784,1.0
exp2a_modified/results/qwen/similarity_2m_L29_late_mid.csv ADDED
@@ -0,0 +1,7 @@
+ ,left,right,above,under,far,close
+ left,0.99999976,0.9968462,0.96972966,0.9656495,0.9295452,0.9282496
+ right,0.9968462,1.0000004,0.9678302,0.96753937,0.9298134,0.92841697
+ above,0.96972966,0.9678302,0.99999964,0.99251354,0.93922085,0.93715733
+ under,0.9656495,0.96753937,0.99251354,0.9999999,0.9394452,0.93831193
+ far,0.9295452,0.9298134,0.93922085,0.9394452,0.9999995,0.9991802
+ close,0.9282496,0.92841697,0.93715733,0.93831193,0.9991802,0.9999995
exp2a_modified/results/qwen/similarity_2m_L35_late.csv ADDED
@@ -0,0 +1,7 @@
+ ,left,right,above,under,far,close
+ left,1.0000002,0.9965475,0.9649773,0.96030045,0.8989403,0.89823353
+ right,0.9965475,1.0,0.9598953,0.9629967,0.8915997,0.89078486
+ above,0.9649773,0.9598953,1.0000002,0.98364305,0.9208069,0.9149355
+ under,0.96030045,0.9629967,0.98364305,1.0,0.9093851,0.9072825
+ far,0.8989403,0.8915997,0.9208069,0.9093851,0.99999994,0.996667
+ close,0.89823353,0.89078486,0.9149355,0.9072825,0.996667,0.99999976
exp2a_modified/results/qwen/similarity_2m_L7_early.csv ADDED
@@ -0,0 +1,7 @@
+ ,left,right,above,under,far,close
+ left,0.9999995,0.9999402,0.9966904,0.9964489,0.98790896,0.9878886
+ right,0.9999402,0.9999999,0.996633,0.9964943,0.98773515,0.987622
+ above,0.9966904,0.996633,0.99999964,0.99989176,0.9908798,0.9907012
+ under,0.9964489,0.9964943,0.99989176,1.0000005,0.9907862,0.9905167
+ far,0.98790896,0.98773515,0.9908798,0.9907862,1.0,0.999933
+ close,0.9878886,0.987622,0.9907012,0.9905167,0.999933,1.0000002
exp2a_modified/results/qwen/similarity_400k_L14_early_mid.csv ADDED
@@ -0,0 +1,7 @@
+ ,left,right,above,under,far,close
+ left,0.9999994,0.99993646,0.9955939,0.99546295,0.97487485,0.9744284
+ right,0.99993646,0.9999994,0.9957069,0.9956356,0.97504747,0.9745439
+ above,0.9955939,0.9957069,0.9999995,0.99986076,0.9844377,0.98397595
+ under,0.99546295,0.9956356,0.99986076,1.0,0.9837175,0.98320484
+ far,0.97487485,0.97504747,0.9844377,0.9837175,0.9999999,0.9999202
+ close,0.9744284,0.9745439,0.98397595,0.98320484,0.9999202,1.0