gary-boon (Claude Opus 4.5) committed
Commit f94a7ae · 1 Parent(s): 383a328

Fix: Add attn_implementation="eager" to model switch function


When switching models via the /models/switch endpoint, the model was
loaded without attn_implementation="eager", causing Devstral/Mistral
models to return None for attention weights. This broke the Research
Attention Analyzer with the error:
"TypeError: object of type 'NoneType' has no len()"

Added trust_remote_code=True and attn_implementation="eager" to match
the initial model loading configuration.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

Files changed (1):
  1. backend/model_service.py +3 -1
backend/model_service.py CHANGED
@@ -1245,7 +1245,9 @@ async def switch_model(request: Dict[str, Any], authenticated: bool = Depends(ve
     manager.model = AutoModelForCausalLM.from_pretrained(
         manager.model_name,
         torch_dtype=torch.float16,
-        device_map="auto"
+        device_map="auto",
+        trust_remote_code=True,
+        attn_implementation="eager"  # Required for output_attentions=True
     )
 
     # Create adapter
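
For reference, a minimal sketch of the behavior this commit fixes, assuming transformers and a Mistral-family checkpoint (the model name below is illustrative, not taken from this repo): without attn_implementation="eager", a forward pass with output_attentions=True can leave entries of outputs.attentions as None, which is exactly what tripped the len() call in the Research Attention Analyzer.

# Minimal repro/verification sketch; the checkpoint name is an assumption
# for illustration, not this repo's configured model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mistralai/Mistral-7B-v0.1"  # illustrative Mistral-family model

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
    attn_implementation="eager",  # required so attention weights are materialized
)

inputs = tokenizer("Hello world", return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model(**inputs, output_attentions=True)

# Under eager attention each entry is a (batch, heads, seq, seq) tensor;
# with the default sdpa/flash kernels an entry may be None, and calling
# len() on it raises "TypeError: object of type 'NoneType' has no len()".
print(len(outputs.attentions), outputs.attentions[0].shape)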