Devstral + DGX Spark: Phased Implementation Plan
Incremental approach: prove infrastructure first, then add model support.
Overview
This plan breaks the Devstral + DGX Spark work into phases that can be validated independently:
- Phase 0: Secure GPU HF Space + verify basic routing (make private, add HF token auth, test auth works)
- Phase 0.5: Fix critical API route routing (backendFetch for key endpoints, prove GPU routing works)
- Phase 1: Deploy existing CodeGen to DGX Spark (prove Docker/GPU infrastructure)
- Phase 2: Add Devstral backend support, test correctness locally
- Phase 2b: Frontend dynamic layer handling
- Phase 2c: Wire Spark into frontend backend router + Deploy Devstral to GPU HF Space
- Phase 3: Deploy Devstral to DGX Spark
- Phase 4: Future enhancements (optional)
Existing Backend Routing Infrastructure
The frontend already has a sophisticated backend routing system that switches between multiple backends based on user settings and environment.
Current Architecture
File: visualisable-ai/lib/backend-router.ts
export type BackendTier = 'free' | 'premium' | 'research' | 'admin' | 'local';
export interface BackendConfig {
url: string;
wsUrl: string;
tier: BackendTier;
reason: string;
device: 'cpu' | 'gpu' | 'spark';
performance: { inferenceSpeed: string; concurrentUsers: string; };
}
Current Backend Targets:
| Target | URL | When Used |
|---|---|---|
| Local | localhost:8000 | Local mode + Remote NOT enabled |
| CPU HuggingFace | visualisable-ai-api.hf.space | Free tier (default) |
| GPU HuggingFace | visualisable-ai-api-gpu.hf.space | Premium tier (gpuEnabled=true) |
Routing Logic (from getBackendForUser):
- Local mode + no Remote → localhost:8000
- Local mode + Remote + GPU → GPU HF Space
- Local mode + Remote + no GPU → CPU HF Space
- Production + GPU → GPU HF Space
- Production + no GPU → CPU HF Space
Admin UI Controls
File: visualisable-ai/app/admin/users/page.tsx
Two toggles per user:
- GPU Access (gpuEnabled): Routes to the GPU HuggingFace Space
- Remote (backendOverride: 'remote'): In local mode, switches from localhost to HuggingFace
Environment Variables
NEXT_PUBLIC_MODE=local # Enables local mode (shows Remote toggle)
NEXT_PUBLIC_API_URL=http://localhost:8000 # Local backend URL
NEXT_PUBLIC_CPU_BACKEND_URL=... # CPU HuggingFace Space
NEXT_PUBLIC_GPU_BACKEND_URL=... # GPU HuggingFace Space
Current Gap: Server-Side API Routes
Issue: The backend-router.ts correctly determines the backend URL per-user, but many Next.js API routes use a hardcoded BACKEND_URL:
// These routes use hardcoded BACKEND_URL (NOT per-user routing):
// - /api/research/attention/analyze/route.ts
// - /api/proxy/[...path]/route.ts
// - /api/demos/route.ts
// - /api/vocabulary/search/route.ts
// etc.
const BACKEND_URL = process.env.BACKEND_URL || 'https://visualisable-ai-api.hf.space';
Result: Even if a user has gpuEnabled=true, server-side API routes still call the CPU Space.
Fix Required: API routes need to:
- Get current user via Clerk
- Call getBackendForUser(user) to get the correct backend URL
- Use that URL for the fetch
Resolution: Phase 0.5 fixes the critical /api/research/attention/analyze endpoint to prove routing works. Remaining routes are fixed in Phase 2c.
Phase 0: Secure GPU HF Space + Verify Existing Routing
Goal: Before adding Devstral/Spark support, secure the GPU HuggingFace Space to prevent unauthorized wake-ups and cost leakage, then verify the existing CPU/GPU routing works correctly.
The Problem
Even with API key protection, a public HuggingFace Space can be:
- Discovered - Anyone can find it on HuggingFace
- Woken up - Visiting the URL or hitting any endpoint (even returning 401) wakes a sleeping Space
- Kept awake - Repeated requests keep the GPU running and billing
With high-VRAM GPU tiers (L40S at ~$4/hr, A100 at ~$6/hr), this is a real cost risk.
0.1 Make GPU HF Space Private
On HuggingFace:
- Go to your GPU Space settings
- Change visibility from Public to Private
- This prevents discovery and unauthorized access
Note: Private Spaces require authentication via HuggingFace token.
Important caveat: Making the Space private prevents casual discovery, but any request that reaches the Space (even one answered with 401 Unauthorized) may still wake it, depending on HuggingFace's behavior. Private is still the right move, just not a perfect shield against all wake-ups. The sleep timeout (step 0.4) is the primary defense-in-depth measure.
0.2 Add Server-Side HF Token to Vercel
Add a server-side only HF token (no NEXT_PUBLIC_ prefix):
In Vercel Environment Variables:
HF_TOKEN=hf_xxxxxxxxxxxxxxxxxxxx
Generate this token at https://huggingface.co/settings/tokens with read access to your private Space.
Important: Do NOT use NEXT_PUBLIC_HF_TOKEN - that exposes the token to the client.
0.3 Create Server-Only Auth Module
Why a separate file? In Next.js, any code imported into client components can end up in the client bundle. backend-router.ts contains getBackendForUser() which may be imported for URL/tier decisions in client code. If we put process.env.HF_TOKEN in the same file, it risks being referenced from client bundles (even if tree-shaken, it's fragile).
Solution: Keep backend-router.ts as "pure decision logic" (URLs, tiers, reasons) and put all server-only headers in a separate module that is only imported from API routes.
File: visualisable-ai/lib/backend-auth.server.ts
import 'server-only'; // Next.js guard: errors if accidentally imported from client
/**
* Server-only authentication headers for backend requests.
*
* IMPORTANT: This file must ONLY be imported from Next.js API routes (server-side).
* Never import this from client components or shared code.
* The 'server-only' import above will cause a build error if this is violated.
*/
// Accept both env var names for backwards compatibility; standardise on API_KEY going forward
const API_KEY = process.env.API_KEY ||
process.env.BACKEND_API_KEY ||
'';
const HF_TOKEN = process.env.HF_TOKEN; // Server-side only, no NEXT_PUBLIC_ prefix
/**
* Get base authentication headers (API key only).
* Use this as the foundation, then add HF token conditionally based on target.
*/
export function getBaseAuthHeaders(): HeadersInit {
const headers: Record<string, string> = {
'Content-Type': 'application/json',
};
if (API_KEY) {
headers['X-API-Key'] = API_KEY;
}
return headers;
}
/**
* Get HF-specific auth header (for private Spaces).
* Only attach this when the target is a HuggingFace Space.
*/
export function getHfAuthHeader(): HeadersInit {
return HF_TOKEN ? { Authorization: `Bearer ${HF_TOKEN}` } : {};
}
/**
* Check if a URL is a HuggingFace Space.
*/
export function isHfSpace(url: string): boolean {
return url.includes('.hf.space');
}
Update existing getBackendHeaders() in backend-router.ts:
Leave the existing function for backward compatibility, but remove any server-side secrets:
// backend-router.ts - keep as client-safe decision logic only
export function getBackendHeaders(): HeadersInit {
// Note: This function returns headers safe for client-side use.
// For server-side requests with API keys/tokens, use backend-auth.server.ts
return {
'Content-Type': 'application/json',
};
}
Rule: HF_TOKEN and API_KEY are only used in Next.js API routes (server), never in client code.
0.4 Configure Sleep Timeout (Defense in Depth)
On HuggingFace GPU Space settings:
- Set Sleep timeout to minimum (e.g., 5 minutes of inactivity)
- This reduces cost if the Space is somehow woken unexpectedly
Trade-off note for stakeholders: A 5-minute sleep timeout protects cost but increases cold starts. When a GPU-enabled user makes their first request after the Space has been sleeping, they will experience a delay while the Space wakes up (container restart + model load). For Devstral (~48GB), this cold start can take several minutes. Options to mitigate:
- Longer timeout (e.g., 15-30 minutes) - reduces cold starts but increases cost during idle periods
- "Keep warm" scheduled pings - a cron job that pings
/healthevery few minutes to prevent sleep (increases cost to ~continuous billing) - Accept cold starts - for research/premium users who understand the trade-off
Start with 5 minutes and adjust based on usage patterns and user feedback.
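If you do opt for keep-warm pings, the job must run somewhere that can hold the HF token server-side. A minimal sketch, assuming GPU_SPACE_URL and HF_TOKEN env var names (both assumptions) and that the Space serves /health (see the endpoint note in 0.5):
# keep_warm.py - run every few minutes from cron to prevent the Space from sleeping.
# GPU_SPACE_URL and HF_TOKEN are assumed env var names; adjust to your setup.
import os
import urllib.request

url = os.environ.get("GPU_SPACE_URL", "https://visualisable-ai-api-gpu.hf.space") + "/health"
token = os.environ["HF_TOKEN"]  # required because the Space is private

request = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
with urllib.request.urlopen(request, timeout=30) as response:
    print("keep-warm ping status:", response.status)
Keeping the Space warm this way effectively means near-continuous GPU billing, so only enable it deliberately.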
0.5 Verify Existing Routing Works
Before proceeding to Phase 1, verify the current CPU/GPU routing is working.
Note on user-specific tests: Tests 1 and 2 require testing "as a specific user" because routing depends on Clerk user metadata (gpuEnabled). The curl examples cannot easily reproduce this. Use one of these approaches:
Browser test (simplest): Log in as each user type and trigger the endpoint via the UI, then check backend logs to confirm which backend received the request.
Admin diagnostic endpoint (recommended for automation): Add a temporary /api/debug/backend-routing endpoint that returns the backend URL chosen for the current user:
// app/api/debug/backend-routing/route.ts
import { currentUser } from '@clerk/nextjs/server';
import { getBackendForUser } from '@/lib/backend-router';
export async function GET() {
  const user = await currentUser();
  const backend = getBackendForUser(user);
  return Response.json({
    tier: backend.tier,
    url: backend.url,
    device: backend.device,
    userEmail: user?.emailAddresses?.[0]?.emailAddress
  });
}
Then curl with a Clerk session cookie to test routing per-user.
Clerk session token in curl: If you have tooling to extract a Clerk session token, pass it in the request.
Test 1: CPU HF Space (free tier user)
# Option A: Browser test
# Log in as a user WITHOUT gpuEnabled, trigger analyze, check logs
# Option B: With diagnostic endpoint (if added)
# Log in as free tier user, then:
curl https://your-app.vercel.app/api/debug/backend-routing \
-H "Cookie: __session=<clerk_session_cookie>"
# Expected: tier=free, url=visualisable-ai-api.hf.space
Test 2: GPU HF Space (GPU-enabled user)
# Option A: Browser test
# Log in as a user WITH gpuEnabled=true, trigger analyze, check logs
# Option B: With diagnostic endpoint (if added)
# Log in as GPU-enabled user, then:
curl https://your-app.vercel.app/api/debug/backend-routing \
-H "Cookie: __session=<clerk_session_cookie>"
# Expected: tier=premium, url=visualisable-ai-api-gpu.hf.space
Test 3: Private Space rejects unauthenticated requests
# Direct request to GPU Space without token should fail
curl https://visualisable-ai-api-gpu.hf.space/health
# Expected: 401 Unauthorized or redirect to login (or HTML login page)
Test 4: Private Space accepts authenticated requests
# Direct request with HF token should succeed
curl -H "Authorization: Bearer hf_xxxx" \
https://visualisable-ai-api-gpu.hf.space/health
# Expected: 200 OK
Note on endpoint choice: These tests use /health. Verify your backend actually serves /health at the root. Some HF Space setups front a Gradio app or use a different path prefix. If /health doesn't exist, substitute any cheap "always exists" endpoint you know is served (even / or a simple status endpoint). The goal is to test auth, not the specific endpoint.
Note on private Space responses: A private Space may return a redirect or HTML login page rather than a neat JSON 401. Both indicate the unauthenticated request was rejected, which is what we want to verify.
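Tests 3 and 4 can also be scripted. A minimal sketch, assuming the requests library, the /health endpoint discussed above, and an HF_TOKEN environment variable:
# check_space_auth.py - unauthenticated requests should be rejected; authenticated should succeed.
import os
import requests

SPACE = "https://visualisable-ai-api-gpu.hf.space"
token = os.environ["HF_TOKEN"]

# Test 3: no token - expect 401/403 or a redirect to the HF login page
anon = requests.get(f"{SPACE}/health", timeout=30, allow_redirects=False)
print("unauthenticated:", anon.status_code)

# Test 4: with token - expect 200
auth = requests.get(f"{SPACE}/health", timeout=30,
                    headers={"Authorization": f"Bearer {token}"})
print("authenticated:", auth.status_code)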
0.6 Validation Criteria
- GPU HF Space set to Private on HuggingFace
- HF_TOKEN (server-side only) added to Vercel environment variables
- lib/backend-auth.server.ts created with getBaseAuthHeaders(), getHfAuthHeader(), isHfSpace()
- getBackendHeaders() in backend-router.ts cleaned up (no secrets)
- Sleep timeout configured on GPU Space (5 minutes recommended)
- Test: Direct unauthenticated request to GPU Space returns 401
- Test: Authenticated request via Vercel API routes succeeds
- Test: CPU HF Space still works for free tier users
- Test: GPU-enabled user requests route to GPU Space and succeed
- No changes to Devstral/Spark yet - existing CodeGen on both Spaces works
Phase 0.5: Fix Critical API Route Routing
Goal: Before investing in Spark infrastructure (Phase 1), fix the most critical API routes to use per-user backend routing. This gives you confidence that GPU routing actually works before paying for a bigger GPU tier.
Why now? Phase 0 verifies that the routing logic in getBackendForUser() is correct and that the private Space accepts authenticated requests. But many API routes still use hardcoded BACKEND_URL, so GPU-enabled users may not actually reach the GPU Space. Phase 0.5 fixes this gap for the key endpoints you use to validate.
0.5.1 Create Minimal backendFetch Helper
File: visualisable-ai/lib/backend-fetch.ts
This is the minimal helper for simple JSON POST calls. Proxy-style routes (method forwarding, query strings, binary bodies, streaming) will be handled separately in Phase 2c with a backendProxy helper.
import 'server-only'; // Prevent accidental client import
import { auth, currentUser } from '@clerk/nextjs/server';
import { getBackendForUser } from './backend-router';
import { getBaseAuthHeaders, getHfAuthHeader, isHfSpace } from './backend-auth.server';
/**
* Fetch from the backend appropriate for the current user.
*
* This helper:
* 1. Gets the current user via Clerk
* 2. Determines the correct backend (CPU HF, GPU HF, Spark, local)
* 3. Adds authentication headers (API key, and HF token only for HF targets)
*
* Use this in API routes instead of hardcoded BACKEND_URL.
*
* Note: For proxy-style routes that need method/query/body forwarding,
* use backendProxy() instead (added in Phase 2c).
*/
export async function backendFetch(
endpoint: string,
options: RequestInit = {}
): Promise<Response> {
const { userId } = await auth();
const user = userId ? await currentUser() : null;
const backend = getBackendForUser(user);
const url = `${backend.url}${endpoint}`;
return fetch(url, {
...options,
headers: {
...getBaseAuthHeaders(),
...(isHfSpace(backend.url) ? getHfAuthHeader() : {}),
...options.headers,
},
});
}
0.5.2 Update Critical Endpoints
Choose 1-2 endpoints that you actively use for testing routing:
Recommended: /api/research/attention/analyze (the main analyze endpoint)
File: visualisable-ai/app/api/research/attention/analyze/route.ts
import { NextRequest, NextResponse } from "next/server";
import { backendFetch } from "@/lib/backend-fetch";
export async function POST(request: NextRequest) {
try {
// For small JSON payloads like this, parse-then-stringify is fine.
// For large/binary/streaming payloads, use backendProxy instead.
const body = await request.json();
const { prompt, max_tokens, temperature } = body;
// Use backendFetch for per-user routing
const response = await backendFetch('/analyze/research/attention', {
method: 'POST',
body: JSON.stringify({
prompt,
max_tokens: max_tokens || 8,
temperature: temperature || 0.7
})
});
if (!response.ok) {
const error = await response.text();
throw new Error(`Backend error: ${error}`);
}
const data = await response.json();
return NextResponse.json(data);
} catch (error) {
console.error("Research attention analysis error:", error);
return NextResponse.json(
{ error: error instanceof Error ? error.message : "Analysis failed" },
{ status: 500 }
);
}
}
0.5.3 Validation
Re-run the Phase 0 user-specific tests with the updated endpoint:
Test: GPU-enabled user's request to /api/research/attention/analyze actually reaches GPU HF Space.
How to verify:
- Add temporary logging in the API route: console.log('Routing to:', backend.url);
- Or check GPU Space logs after triggering a request as a GPU-enabled user.
0.5.4 Validation Criteria
- lib/backend-fetch.ts created
- At least one critical endpoint updated to use backendFetch
- Test: GPU-enabled user's analyze request reaches GPU HF Space (verified via logs)
- Test: Free tier user's analyze request still goes to CPU HF Space
- Remaining API route fixes deferred to Phase 2c (lower priority)
Phase 1: Deploy CodeGen to DGX Spark
Goal: Prove the Docker deployment infrastructure works with the existing CodeGen model.
1.1 Create Dockerfile
File: Dockerfile
# Bump with care, retest CUDA + torch compatibility
FROM nvcr.io/nvidia/pytorch:24.01-py3
WORKDIR /app
RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["python", "-m", "uvicorn", "backend.model_service:app", "--host", "0.0.0.0", "--port", "8000"]
1.2 Create Docker Compose
File: docker/compose.spark.yml
services:
visualisable-ai-backend:
build:
context: ..
dockerfile: Dockerfile
# container_name: visualisable-ai-backend # Uncomment for single-instance; leave commented for multi-branch
ports:
- "${PORT:-8000}:8000"
shm_size: "8gb"
volumes:
- ..:/app # Mount repo for dev hot-reload (requires --reload in command)
- /srv/models-cache/huggingface:/srv/models-cache/huggingface:rw # Writable HF cache
- ../runs:/app/runs # Outputs (relative to docker/ folder)
environment:
- HF_HOME=/srv/models-cache/huggingface
- TRANSFORMERS_CACHE=/srv/models-cache/huggingface
- DEFAULT_MODEL=${DEFAULT_MODEL:-codegen-350m}
- API_KEY=${API_KEY}
- HF_TOKEN=${HF_TOKEN}
- HUGGINGFACE_HUB_TOKEN=${HF_TOKEN}
# Operational tuning (included from day one for self-documentation)
- MAX_CONTEXT=${MAX_CONTEXT:-8192}
- BATCH_SIZE=${BATCH_SIZE:-1}
- TORCH_DTYPE=${TORCH_DTYPE:-fp16}
# Uncomment if experiencing CUDA memory fragmentation:
# - PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512
gpus: all
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
interval: 30s
timeout: 3s
retries: 5
restart: unless-stopped
# Dev mode: uncomment to enable hot-reload
# command: ["python", "-m", "uvicorn", "backend.model_service:app", "--host", "0.0.0.0", "--port", "8000", "--reload"]
Notes:
- /srv/models-cache/huggingface is the writable HF cache directory
- No /srv/models mount is needed for Phase 1 (CodeGen downloads to the cache)
- Multiple branches: Use PORT and Compose project names to avoid collisions:
PORT=8001 docker compose -p visai-branch-a -f docker/compose.spark.yml --env-file .env.spark up -d --build
PORT=8002 docker compose -p visai-branch-b -f docker/compose.spark.yml --env-file .env.spark up -d --build
1.3 Create Environment Template
File: .env.spark.example
# DGX Spark Environment Configuration
# Copy to .env.spark and fill in values
# Backend port
PORT=8000
# Default model to load
DEFAULT_MODEL=codegen-350m
# Note: fp16 is recommended for GPU runs (faster, lower VRAM).
# Use fp32 only when debugging numerical issues.
# API key for authentication (generate a secure random string)
API_KEY=your-api-key-here
# HuggingFace token (for gated models)
HF_TOKEN=your-hf-token-here
# Model cache location on Spark (must be writable)
HF_HOME=/srv/models-cache/huggingface
# Operational tuning for large models
MAX_CONTEXT=8192
BATCH_SIZE=1
TORCH_DTYPE=fp16
# Uncomment if experiencing CUDA memory fragmentation:
# PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512
1.4 Update .gitignore
File: .gitignore (append)
# Spark deployment
.env.spark
runs/*
!runs/.gitkeep
Create the runs directory with placeholder:
mkdir -p runs
touch runs/.gitkeep
git add runs/.gitkeep
This ensures the runs/ folder exists in fresh clones (required by compose.spark.yml volume mount ../runs:/app/runs).
Important: Commit runs/.gitkeep in the same PR as the .gitignore changes.
1.5 Ensure /health Returns Fast and Add Debug Endpoints
CRITICAL: The /health endpoint MUST return immediately (HTTP 200) even while the model is still loading. If it blocks on model load, Compose will mark the container unhealthy during slow Devstral downloads in Phase 3.
Check existing /health implementation:
- Should return {"status": "ok"} immediately
- Model loading status should be on a separate /ready endpoint
If /health currently blocks, add a /ready endpoint:
- /health → process is up (always fast, always 200)
- /ready → model is loaded and ready for inference
  - Return 200 when the model is loaded and ready
  - Return 503 while the model is still loading (allows watch to show a clear state change)
Also add /debug/device in Phase 1 so validation can verify model placement without relying on logs:
- cuda_available: whether CUDA is available
- model_loaded: whether the model is loaded
- model_device: the device the model is on
- torch_dtype: the dtype in use
- model_id: the loaded model ID
Security note: Do not return environment variables, tokens, or other secrets from /debug/device.
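A minimal sketch of the three endpoints for backend/model_service.py. It assumes a module-level model manager exposing model_loaded, model_id, device, and dtype (the same attribute names used in the Phase 2.5 snippets); the real service already defines app and its own manager, so the placeholder below just makes the sketch self-contained:
import torch
from fastapi import FastAPI
from fastapi.responses import JSONResponse

class _ModelManager:
    # Stand-in for the service's existing manager (assumed interface)
    model_loaded = False
    model_id = None
    device = None
    dtype = None

model_manager = _ModelManager()
app = FastAPI()  # already exists in model_service.py; shown here for completeness

@app.get("/health")
def health():
    # Liveness only - must never block on model loading or downloads
    return {"status": "ok"}

@app.get("/ready")
def ready():
    # Readiness: 200 once the model is loaded, 503 while it is still loading
    if model_manager.model_loaded:
        return {"ready": True, "model_id": model_manager.model_id}
    return JSONResponse({"ready": False}, status_code=503)

@app.get("/debug/device")
def debug_device():
    # Model placement info for validation; deliberately returns no env vars or secrets
    loaded = model_manager.model_loaded
    return {
        "cuda_available": torch.cuda.is_available(),
        "model_loaded": loaded,
        "model_device": str(model_manager.device) if loaded else None,
        "torch_dtype": str(model_manager.dtype) if loaded else None,
        "model_id": model_manager.model_id if loaded else None,
    }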
1.6 Spark Prep
On DGX Spark host:
# Create writable cache directory
sudo mkdir -p /srv/models-cache/huggingface
sudo chown -R root:dgx-ml /srv/models-cache
sudo chmod -R 2775 /srv/models-cache
# Clone repo
cd /srv/projects
git clone <repo> visualisable-ai-backend
cd visualisable-ai-backend
# Create env file
cp .env.spark.example .env.spark
vim .env.spark
1.7 Test CodeGen on Spark
# Build and run
docker compose -f docker/compose.spark.yml --env-file .env.spark up -d --build
# Check logs
docker compose -f docker/compose.spark.yml logs -f
# Verify GPU access (deterministic check, not relying on log wording)
docker compose -f docker/compose.spark.yml --env-file .env.spark exec visualisable-ai-backend \
python -c "import torch; print('CUDA available:', torch.cuda.is_available()); print('Device:', torch.cuda.get_device_name(0) if torch.cuda.is_available() else 'no-cuda')"
# Test endpoints
curl http://spark-c691.local:8000/health
curl http://spark-c691.local:8000/ready # Returns 503 until model is loaded, then 200
curl -s http://spark-c691.local:8000/debug/device | python -m json.tool
curl -X POST http://spark-c691.local:8000/analyze/research/attention \
-H "Content-Type: application/json" \
-d '{"prompt": "def hello():", "max_tokens": 5}'
1.8 Validation Criteria
- Container starts and /health returns 200 immediately (before the model loads)
- /health remains fast even during model download
- CodeGen model loads successfully (check logs)
- /ready returns 200 after the model is loaded
- /analyze/research/attention returns a valid response
- CUDA is available in the container (torch.cuda.is_available() returns True)
- Model device verified via the /debug/device endpoint
- .env.spark is gitignored
Phase 2: Add Devstral Backend Support
Goal: Add Devstral model support and validate correctness. This is a correctness test, not a performance test.
2.1 Add MistralAdapter
File: backend/model_adapter.py
class MistralAdapter(ModelAdapter):
"""Adapter for Mistral-based models (Devstral, Mistral, etc.)"""
def _get_layers(self):
"""Defensive access: Mistral layers may be nested differently"""
if hasattr(self.model, 'model') and hasattr(self.model.model, 'layers'):
return self.model.model.layers
elif hasattr(self.model, 'layers'):
return self.model.layers
raise AttributeError("Cannot find transformer layers in Mistral model")
def get_num_layers(self) -> int:
return self.model.config.num_hidden_layers
def get_num_heads(self) -> int:
return self.model.config.num_attention_heads
def get_num_kv_heads(self) -> Optional[int]:
return getattr(self.model.config, 'num_key_value_heads', None)
def get_layer_module(self, layer_idx: int):
return self._get_layers()[layer_idx]
def get_attention_module(self, layer_idx: int):
return self._get_layers()[layer_idx].self_attn
def get_mlp_module(self, layer_idx: int):
return self._get_layers()[layer_idx].mlp
def get_qkv_projections(self, layer_idx: int):
attn = self.get_attention_module(layer_idx)
return attn.q_proj, attn.k_proj, attn.v_proj
Update factory:
def create_adapter(model, tokenizer, model_id):
config = get_model_config(model_id)
architecture = config["architecture"]
if architecture == "gpt_neox":
return CodeGenAdapter(model, tokenizer, model_id)
elif architecture == "llama":
return CodeLlamaAdapter(model, tokenizer, model_id)
elif architecture == "mistral":
return MistralAdapter(model, tokenizer, model_id)
else:
raise ValueError(f"Unsupported architecture: {architecture}")
2.2 Add Devstral to Model Config
File: backend/model_config.py
"devstral-small": {
"hf_path": "mistralai/Devstral-Small-2507",
"display_name": "Devstral Small 24B",
"architecture": "mistral",
"size": "24B",
"num_layers": 40,
"num_heads": 32,
"num_kv_heads": 8,
"vocab_size": 131072,
"context_length": 131072,
"attention_type": "grouped_query",
"requires_gpu": True, # Keep True to steer users to Spark
"min_vram_gb": 48.0,
"min_ram_gb": 96.0
}
Note: requires_gpu: True remains set to guide users toward Spark. CPU inference is technically possible on Mac Studio (512GB RAM) but is painfully slow and not recommended for regular use.
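For context, the 48GB figure follows from the weights alone: roughly 24B parameters at 2 bytes each in BF16. KV cache and activations come on top, which is why extra headroom (such as an 80GB A100, recommended later in this plan) helps. Quick arithmetic:
# Rough arithmetic behind min_vram_gb (weights only, excluding KV cache/activations)
params = 24e9        # ~24B parameters
bytes_per_param = 2  # BF16
print(f"~{params * bytes_per_param / 1e9:.0f} GB of weights")  # ~48 GB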
2.3 Fix Hardcoded Layer Classification
File: backend/model_service.py (~line 1505)
# Fixed (percentage-based, 1-indexed fraction for transformer blocks):
layer_fraction = (layer_idx + 1) / n_layers
if layer_idx == 0:
layer_pattern = {"type": "positional", ...}
elif layer_fraction <= 0.25:
layer_pattern = {"type": "previous_token", ...}
elif layer_fraction <= 0.75:
layer_pattern = {"type": "induction", ...}
else:
layer_pattern = {"type": "semantic", ...}
2.4 Wire Env Vars into Model Loader
File: backend/model_service.py (in load_model() or ModelManager.__init__)
Ensure the backend reads and applies these environment variables:
- MAX_CONTEXT: caps input truncation (tokenizer max_length). If a request includes max_new_tokens, do not silently override it unless you explicitly want a global cap; this prevents confusion when callers expect per-request control.
- BATCH_SIZE: wire in where applicable; otherwise leave as reserved for future batching (only meaningful if the service implements request batching)
- TORCH_DTYPE: map the string to a dtype: bf16 → torch.bfloat16, fp16 → torch.float16, fp32 → torch.float32 (see the sketch below)
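A sketch of the wiring, assuming it sits near load_model() in backend/model_service.py (exact integration points will differ):
import os
import torch

DTYPE_MAP = {"bf16": torch.bfloat16, "fp16": torch.float16, "fp32": torch.float32}

MAX_CONTEXT = int(os.environ.get("MAX_CONTEXT", "8192"))
BATCH_SIZE = int(os.environ.get("BATCH_SIZE", "1"))   # reserved until request batching exists
TORCH_DTYPE = DTYPE_MAP.get(os.environ.get("TORCH_DTYPE", "fp16"), torch.float16)

# Apply MAX_CONTEXT when tokenizing (cap the prompt, leave max_new_tokens per-request):
#   inputs = tokenizer(prompt, truncation=True, max_length=MAX_CONTEXT, return_tensors="pt")
# Apply TORCH_DTYPE when loading the model:
#   model = AutoModelForCausalLM.from_pretrained(hf_path, torch_dtype=TORCH_DTYPE, device_map="auto")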
2.5 Add /models and /models/current Endpoints
File: backend/model_service.py
These endpoints are required by the frontend (Phase 2b.4) and for validation (Phase 2c). Add them as explicit Phase 2 deliverables:
GET /models - List available models:
@app.get("/models")
def list_models():
"""Return list of models this backend can serve."""
return {
"models": [
{
"id": model_id,
"name": config["display_name"],
"available": is_model_available(model_id), # Check VRAM, etc.
"requires_gpu": config.get("requires_gpu", False)
}
for model_id, config in SUPPORTED_MODELS.items()
]
}
GET /models/current - Return currently loaded model:
@app.get("/models/current")
def current_model():
"""Return info about the currently loaded model."""
if not model_manager.model_loaded:
return {"id": None, "device": None, "dtype": None}
return {
"id": model_manager.model_id,
"device": str(model_manager.device),
"dtype": str(model_manager.dtype)
}
Why explicit deliverables? Phase 2c validation depends on these endpoints. Making them "if missing" creates ambiguity. By adding them in Phase 2, the frontend work in 2b and validation in 2c can proceed cleanly.
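The is_model_available() check referenced above is not spelled out in this plan. One possible implementation (a sketch; the real check may also consider RAM or disk) compares free VRAM against the config's min_vram_gb:
import torch

def is_model_available(model_id: str) -> bool:
    """Best-effort availability check used by /models (assumed helper)."""
    config = SUPPORTED_MODELS[model_id]  # SUPPORTED_MODELS as referenced by /models above
    if not config.get("requires_gpu", False):
        return True
    if not torch.cuda.is_available():
        return False
    free_bytes, _total_bytes = torch.cuda.mem_get_info()
    return free_bytes / 1e9 >= config.get("min_vram_gb", 0)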
2.6 Local Validation (Correctness Only)
Option A: Full load on Mac Studio (slow, ~96GB RAM needed)
export DEFAULT_MODEL=devstral-small
export HF_TOKEN=your-token-here
python -m uvicorn backend.model_service:app --host 0.0.0.0 --port 8000
# Test (will be VERY slow on CPU)
curl -X POST http://localhost:8000/analyze/research/attention \
-H "Content-Type: application/json" \
-d '{"prompt": "def hello():", "max_tokens": 2}'
Option B: Unit test without full model load
Write a test that (a sketch follows this list):
- Loads model config, verifies 40 layers
- Checks MistralAdapter layer access pattern
- Validates layer classification fractions
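A sketch of such a test. It assumes MistralAdapter(model, tokenizer, model_id) simply stores its arguments (as the other adapters do) and that get_model_config is importable from backend.model_config; a tiny randomly initialised Mistral stands in for Devstral so nothing large is downloaded:
# test_mistral_adapter.py
from transformers import MistralConfig, MistralForCausalLM

from backend.model_adapter import MistralAdapter
from backend.model_config import get_model_config


def test_devstral_config():
    config = get_model_config("devstral-small")
    assert config["num_layers"] == 40
    assert config["architecture"] == "mistral"


def test_adapter_layer_access():
    tiny = MistralForCausalLM(MistralConfig(
        hidden_size=64, intermediate_size=128, num_hidden_layers=4,
        num_attention_heads=8, num_key_value_heads=2, vocab_size=1000,
    ))
    adapter = MistralAdapter(tiny, None, "mistral-tiny-test")
    assert adapter.get_num_layers() == 4
    assert adapter.get_num_kv_heads() == 2
    q, k, v = adapter.get_qkv_projections(0)
    assert q.weight.shape[0] > k.weight.shape[0]  # grouped-query: fewer K/V heads than Q heads


def test_layer_classification_fractions():
    n_layers = 40
    # Mirrors the percentage thresholds used in model_service.py
    assert (9 + 1) / n_layers <= 0.25    # layer 9 is still "previous_token"
    assert (29 + 1) / n_layers <= 0.75   # layer 29 is still "induction"
    assert (30 + 1) / n_layers > 0.75    # layers 30+ are "semantic"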
2.7 Validation Criteria
- Devstral config added to SUPPORTED_MODELS
- MistralAdapter correctly accesses layers
- Layer classification works for 40-layer model (percentage-based)
- Env vars (MAX_CONTEXT, BATCH_SIZE, TORCH_DTYPE) are wired into the loader
- /models endpoint returns the list of available models
- /models/current endpoint returns the currently loaded model info
- One successful endpoint call (correctness, not performance)
Phase 2b: Frontend Dynamic Layer Handling
Goal: Update frontend to handle models with different layer counts and vocab sizes.
2b.1 Fix Stage Boundaries
File: components/research/VerticalPipeline.tsx
Replace hardcoded layer boundaries with percentage-based:
// Current (hardcoded for 20 layers):
const getStageInfo = (layerIdx: number) => {
if (layerIdx === 0) return { color: 'yellow', label: 'EMBEDDING' };
if (layerIdx <= 5) return { color: 'green', label: 'EARLY' };
if (layerIdx <= 14) return { color: 'blue', label: 'MIDDLE' };
if (layerIdx <= 19) return { color: 'purple', label: 'LATE' };
return { color: 'orange', label: 'OUTPUT' };
};
// Fixed (percentage-based):
const getStageInfo = (layerIdx: number, totalLayers: number) => {
if (layerIdx === 0) return { color: 'yellow', label: 'EMBEDDING' };
const fraction = layerIdx / totalLayers;
if (fraction <= 0.25) return { color: 'green', label: 'EARLY' };
if (fraction <= 0.75) return { color: 'blue', label: 'MIDDLE' };
return { color: 'purple', label: 'LATE' };
};
Update layer slice operations:
const earlyEnd = Math.floor(numLayers * 0.25);
const middleEnd = Math.floor(numLayers * 0.75);
// EARLY LAYERS
{layersData.slice(1, earlyEnd + 1).map(...)}
// MIDDLE LAYERS
{layersData.slice(earlyEnd + 1, middleEnd + 1).map(...)}
// LATE LAYERS (JS slice end is exclusive)
{layersData.slice(middleEnd + 1, numLayers + 1).map(...)}
2b.2 Fix Hardcoded Vocabulary Display
File: components/research/VerticalPipeline.tsx (line ~305)
Replace (51,200 tokens) with dynamic value from modelInfo.vocabSize.
2b.3 Fix Hardcoded head_dim
File: components/research/SpreadsheetGrid.tsx (if exists)
Replace const dHead = 64 with dynamic calculation:
const dHead = modelInfo.hiddenSize / modelInfo.numHeads;
if (!Number.isInteger(dHead)) {
console.warn("Non-integer head_dim", { hiddenSize: modelInfo.hiddenSize, numHeads: modelInfo.numHeads });
}
2b.4 Dynamic Model List from Backend
If the frontend model selector is a static list, update it to populate dynamically from the backend /models endpoint (or similar). This ensures:
- Models only appear when actually available on the connected backend
- Devstral only shows when connected to Spark (not HuggingFace)
If the frontend already fetches supported_models from the backend, this is naturally handled.
2b.5 Validation Criteria
- Stage boundaries work correctly for 40-layer model
- Vocab display shows correct value for each model
- head_dim calculated dynamically (if applicable)
- UI renders correctly with both CodeGen (20 layers) and Devstral (40 layers)
- Model selector only shows models available on the connected backend (requires Phase 2c for full test)
Phase 2c: Wire Spark into Frontend Backend Router
Goal: Add DGX Spark as a fourth backend option in the existing routing infrastructure, and fix server-side API routes to respect per-user backend selection.
Dependency: Phase 2 must be complete (Devstral support merged, /models and /models/current endpoints added) before enabling Devstral as the DEFAULT_MODEL on the GPU HF Space.
Important Network Constraint
Spark is a local-network-only backend. The hostname spark-c691.local is only resolvable on your local network (mDNS).
| Environment | Can reach Spark? | Notes |
|---|---|---|
| Local dev (your machine) | ✅ Yes | Same LAN as Spark |
| Vercel production | ❌ No | Cannot resolve .local hostnames |
| HuggingFace Spaces | ❌ No | Cannot resolve .local hostnames |
Implications:
- Spark toggle is a developer/research feature for local mode only
- Production GPU users should use the GPU HuggingFace Space (via the gpuEnabled toggle)
- Do NOT expose Spark to the public internet without proper security (VPN, auth, etc.)
Spark authentication: Spark requests are authenticated via X-API-Key header (same as local backend). The HF token is only for .hf.space targets and is not sent to Spark. For additional security, consider network-level protection (VPN/Tailscale), but API key alone is sufficient for LAN-only access.
Fallback when Spark is unreachable: No automatic fallback initially; fail fast with a user-visible error message and a quick toggle to switch to Remote/Local. This keeps behaviour predictable—users should always know which backend they are hitting. Automatic fallback could be added later if needed, but explicit is safer for v1.
Important: Production deployments (Vercel) must NOT set NEXT_PUBLIC_MODE=local, otherwise Spark routing could incorrectly activate. Only set this in local development .env.local files.
Backend routing summary:
| Toggle | Production (Vercel) | Local Mode |
|---|---|---|
| Neither | CPU HF Space | localhost:8000 |
| Remote | CPU HF Space | CPU HF Space |
| Remote + GPU | GPU HF Space | GPU HF Space |
| Spark | ❌ Invalid | spark-c691.local:8000 |
2c.1 Update Backend Router
File: visualisable-ai/lib/backend-router.ts
Add Spark URL constant:
const SPARK_BACKEND_URL = process.env.NEXT_PUBLIC_SPARK_BACKEND_URL ||
'http://spark-c691.local:8000';
Update BackendConfig.device type to include Spark:
device: 'cpu' | 'gpu' | 'spark';
Add helper for safe WebSocket URL construction:
function toWsUrl(httpUrl: string, wsPath: string = '/ws'): string {
try {
const url = new URL(httpUrl);
url.protocol = url.protocol === 'https:' ? 'wss:' : 'ws:';
url.pathname = url.pathname.replace(/\/$/, '') + wsPath;
return url.toString();
} catch {
// Fallback for malformed URLs
return httpUrl.replace(/^https:/, 'wss:').replace(/^http:/, 'ws:') + wsPath;
}
}
Note: All current backends (localhost, HuggingFace Spaces, Spark) use /ws as the WebSocket path. If a future backend uses a different path, pass it as the second argument.
Update getBackendForUser to handle Spark routing:
export function getBackendForUser(user: User | null): BackendConfig {
const isLocalMode = process.env.NEXT_PUBLIC_MODE === 'local';
const localBackendUrl = process.env.NEXT_PUBLIC_API_URL || 'http://localhost:8000';
// Check user settings
const hasRemoteOverride = user?.unsafeMetadata?.backendOverride === 'remote';
const hasSparkOverride = user?.unsafeMetadata?.backendOverride === 'spark';
const hasGPUAccess = user?.unsafeMetadata?.gpuEnabled === true;
// SPARK MODE: Only valid in local mode (Spark not reachable from Vercel)
// Spark toggle is a developer/research feature for local network only
if (hasSparkOverride && isLocalMode) {
return {
url: SPARK_BACKEND_URL,
wsUrl: toWsUrl(SPARK_BACKEND_URL),
tier: 'research',
reason: 'DGX Spark backend (local network)',
device: 'spark',
performance: {
inferenceSpeed: '50-200ms',
concurrentUsers: '10+'
}
};
}
// LOCAL MODE: Check if we should use localhost
if (isLocalMode && !hasRemoteOverride) {
return {
url: localBackendUrl,
wsUrl: toWsUrl(localBackendUrl),
tier: 'local' as BackendTier,
reason: 'Local development',
device: 'cpu',
performance: {
inferenceSpeed: 'Variable (local)',
concurrentUsers: 'Unlimited (local)'
}
};
}
// ... rest of existing logic (GPU HF, CPU HF)
}
Note: Spark routing is gated by isLocalMode - even if a user has backendOverride: 'spark' in production, it will fall through to the HuggingFace backends.
Optional extra safety: If you want to ensure Spark is never accidentally chosen server-side (e.g., during SSR in local mode), add a client-side check:
if (hasSparkOverride && isLocalMode && typeof window !== 'undefined') {
// Only route to Spark from client-side code
}
This is optional since SSR typically doesn't make backend calls, but provides defense-in-depth.
Belt-and-braces option: Since NEXT_PUBLIC_MODE is baked into the client bundle at build time, you could add a runtime hostname check as additional defense:
const isLocalHost = typeof window !== 'undefined' &&
(window.location.hostname === 'localhost' || window.location.hostname === '127.0.0.1');
if (hasSparkOverride && isLocalMode && isLocalHost) {
// Spark only available when actually running locally
}
This prevents Spark routing even if someone accidentally deploys a local-mode build.
2c.2 Update Admin UI
File: visualisable-ai/app/admin/users/page.tsx
Add a third toggle for Spark backend with mutual exclusivity (enabling Spark clears Remote, and vice versa):
const toggleSparkBackend = async (userId: string, currentValue: boolean) => {
const user = users.find(u => u.id === userId);
if (!user) return;
const newValue = !currentValue;
// Optimistically update UI - clear Remote if enabling Spark
setUsers(prevUsers => prevUsers.map(u => {
if (u.id === userId) {
return {
...u,
unsafeMetadata: {
...u.unsafeMetadata,
// Mutual exclusivity: Spark and Remote cannot both be set
backendOverride: newValue ? 'spark' : undefined
}
};
}
return u;
}));
// ... API call to persist (same pattern as toggleRemoteBackend)
};
// Also update toggleRemoteBackend to clear Spark when enabling Remote:
const toggleRemoteBackend = async (userId: string, currentValue: boolean) => {
// ... existing code ...
// Mutual exclusivity: backendOverride can only be 'remote', 'spark', or undefined
backendOverride: newValue ? 'remote' : undefined
};
Only show Spark toggle in local mode (it's not useful in production):
{isLocalMode && (
<th className="px-6 py-3 text-left text-xs font-medium text-gray-400 uppercase tracking-wider">
Spark
</th>
)}
// In row:
{isLocalMode && (
<td className="px-6 py-4 whitespace-nowrap">
<button
onClick={() => toggleSparkBackend(user.id, hasSparkOverride)}
className={`relative inline-flex h-6 w-11 items-center rounded-full transition-colors cursor-pointer hover:opacity-80 ${
hasSparkOverride ? 'bg-orange-600' : 'bg-gray-700'
}`}
title="Use DGX Spark backend (requires local network access)"
>
<span className={`inline-block h-4 w-4 transform rounded-full bg-white transition-transform ${
hasSparkOverride ? 'translate-x-6' : 'translate-x-1'
}`} />
</button>
</td>
)}
2c.3 Fix Server-Side API Routes
Critical: Some API routes bypass per-user routing by using hardcoded BACKEND_URL.
Routes already correct (use getBackendForUser() + getBackendHeaders()):
- app/api/generate/route.ts ✅
- app/api/swe-bench/route.ts ✅
Routes to update:
- app/api/research/attention/analyze/route.ts
- app/api/proxy/[...path]/route.ts
- app/api/demos/route.ts
- app/api/demos/run/route.ts
- app/api/vocabulary/search/route.ts
- app/api/vocabulary/browse/route.ts
- app/api/token/metadata/route.ts
- app/api/backend/[...path]/route.ts
Pattern to apply:
By Phase 2c, lib/backend-fetch.ts already exists (created in Phase 0.5). Use the appropriate helper:
- backendFetch(endpoint, options) - for simple JSON POST calls (most routes)
- backendProxy(request, endpointPath) - for pass-through proxy routes (added below)
For simple JSON routes:
import { backendFetch } from '@/lib/backend-fetch';
export async function POST(request: NextRequest) {
const body = await request.json();
const response = await backendFetch('/some/endpoint', {
method: 'POST',
body: JSON.stringify(body)
});
// ...
}
For proxy routes (e.g., /api/proxy/[...path], /api/backend/[...path]):
Add backendProxy to lib/backend-fetch.ts (extending the file created in Phase 0.5):
// lib/backend-fetch.ts - ADD to existing file (imports already present from Phase 0.5)
// Add this import at the top:
import { NextRequest } from 'next/server';
/**
* Proxy a request to the backend with full pass-through.
*
* Handles:
* - Method forwarding (GET, POST, PUT, DELETE, etc.)
* - Query string forwarding
* - Body forwarding (including binary)
* - Header pass-through (excluding hop-by-hop headers)
* - Returns raw Response for streaming
*
* Use for catch-all proxy routes like /api/proxy/[...path].
*
* @param request - The incoming Next.js request
* @param endpointPath - Path to forward to (must NOT include query string)
*/
export async function backendProxy(
request: NextRequest,
endpointPath: string
): Promise<Response> {
const { userId } = await auth();
const user = userId ? await currentUser() : null;
const backend = getBackendForUser(user);
// Build URL with query string from original request
// Note: endpointPath should be a clean path without query string
const url = new URL(endpointPath, backend.url);
url.search = request.nextUrl.search;
// Headers to exclude:
// - hop-by-hop headers (not meant to be forwarded)
// - auth headers (we add our own server-side auth, don't leak client tokens)
// - proxy/CDN headers (avoid confusing upstream, keep logs clean)
// - content-length (let fetch recalculate for streaming body)
const excludeHeaders = new Set([
'host', 'connection', 'keep-alive', 'transfer-encoding',
'te', 'trailer', 'upgrade', 'proxy-authorization', 'proxy-authenticate',
'authorization', 'cookie', // Don't forward client auth to backend
'x-forwarded-for', 'x-forwarded-proto', 'x-forwarded-host', // Proxy headers
'cf-connecting-ip', 'cf-ray', 'cf-ipcountry', // Cloudflare headers
'content-length' // Let fetch set this for streaming body
]);
// Forward headers (except hop-by-hop)
const forwardHeaders: Record<string, string> = {};
request.headers.forEach((value, key) => {
if (!excludeHeaders.has(key.toLowerCase())) {
forwardHeaders[key] = value;
}
});
// Merge with auth headers (auth headers take precedence)
// Only attach HF token for HuggingFace Space targets
const headers = {
...forwardHeaders,
...getBaseAuthHeaders(),
...(isHfSpace(backend.url) ? getHfAuthHeader() : {}),
};
// Forward body for methods that have one
const hasBody = !['GET', 'HEAD'].includes(request.method);
const body = hasBody ? request.body : undefined;
return fetch(url.toString(), {
method: request.method,
headers,
body,
// @ts-expect-error: duplex is required for streaming body but not in types
duplex: hasBody ? 'half' : undefined,
});
}
Usage in proxy routes:
import { NextRequest } from 'next/server';
import { backendProxy } from '@/lib/backend-fetch';
// IMPORTANT: Use Node runtime for streaming body support (duplex: 'half')
export const runtime = 'nodejs';
// app/api/proxy/[...path]/route.ts
export async function GET(request: NextRequest, { params }: { params: { path: string[] } }) {
// params.path is clean (no query string) - query comes from request.nextUrl.search
const endpointPath = '/' + params.path.join('/');
return backendProxy(request, endpointPath);
}
export async function POST(request: NextRequest, { params }: { params: { path: string[] } }) {
const endpointPath = '/' + params.path.join('/');
return backendProxy(request, endpointPath);
}
// ... same for PUT, DELETE, etc.
Implementation notes:
- Runtime requirement: All routes using backendProxy must use export const runtime = 'nodejs' because request.body streaming with duplex: 'half' requires Node (not Edge). This includes /api/proxy/[...path], /api/backend/[...path], and any other catch-all proxy routes.
- Authentication is centralized: Both helpers use getBaseAuthHeaders() (API key) and conditionally add getHfAuthHeader() (HF token) based on the isHfSpace() check.
- HF token only for HF backends: The isHfSpace() check ensures the HF token is only sent to .hf.space URLs. This keeps Spark and localhost logs clean and avoids sending credentials to non-HF targets.
- Streaming works automatically: backendProxy returns the raw Response without consuming the body.
- Body handling: Uses request.body directly (a ReadableStream) with duplex: 'half' for streaming request bodies.
2c.4 Add Environment Variables
File: visualisable-ai/.env.local (local development only)
# DGX Spark backend URL (for local network access)
NEXT_PUBLIC_SPARK_BACKEND_URL=http://spark-c691.local:8000
# Enable local mode (shows Spark toggle, allows localhost backend)
NEXT_PUBLIC_MODE=local
File: visualisable-ai/.env.example (document but don't set values)
# DGX Spark backend URL (for local network access)
# NEXT_PUBLIC_SPARK_BACKEND_URL=http://spark-c691.local:8000
# Local mode - ONLY set in .env.local, NEVER in production
# NEXT_PUBLIC_MODE=local
⚠️ CRITICAL: Do NOT define NEXT_PUBLIC_MODE in Vercel
This is a belt-and-braces safety measure:
- Only define NEXT_PUBLIC_MODE=local in .env.local (local development)
- Never add it to Vercel environment variables
- This makes accidental Spark exposure impossible, even if someone toggles user metadata incorrectly
If NEXT_PUBLIC_MODE is undefined in production, Spark routing is disabled regardless of user settings.
2c.5 Update TierIndicator (Optional)
File: visualisable-ai/components/TierIndicator.tsx
Add Spark-specific display if the component shows current backend:
if (device === 'spark') {
return { icon: <Cpu />, label: 'Spark', color: 'orange' };
}
2c.6 Toggle Behavior Notes
The three toggles should be mutually exclusive for backendOverride:
- Remote → backendOverride: 'remote' (uses HuggingFace)
- Spark → backendOverride: 'spark' (uses DGX Spark, local mode only)
- Neither → backendOverride: undefined (uses localhost in local mode)
GPU Access remains independent—it controls which HuggingFace Space to use when Remote is enabled.
The code in 2c.2 handles mutual exclusivity by using a single backendOverride field that can only hold one value.
2c.7 Verify /models Endpoints (Added in Phase 2)
The frontend model selector (Phase 2b.4) depends on the /models and /models/current endpoints added in Phase 2.5. Verify these endpoints work correctly on all backends and return:
{
"models": [
{
"id": "codegen-350m",
"name": "CodeGen 350M",
"available": true,
"requires_gpu": false
},
{
"id": "devstral-small",
"name": "Devstral Small 24B",
"available": true,
"requires_gpu": true
}
]
}
Model availability by backend:
| Model | CPU HF Space | GPU HF Space | Spark |
|---|---|---|---|
| CodeGen | ✅ available (default) | ✅ available | ✅ available |
| Devstral | ❌ unavailable | ✅ available (default) | ✅ available |
Production model strategy:
- CPU HF Space: CodeGen only (free tier users)
- GPU HF Space: Devstral as default (GPU-enabled users get Devstral automatically)
- Spark: Both models available (local development/research)
Verify /models/current endpoint (added in Phase 2.5) returns the currently loaded model:
{
"id": "devstral-small",
"device": "cuda",
"dtype": "bf16"
}
This is used for:
- Frontend to know which model is active without parsing
/modelslist - Debugging to quickly verify which model a backend is running
- The model_id acceptance test in Phase 2c validation
2c.8 Configure GPU HuggingFace Space for Devstral
Prerequisites: The GPU HF Space must have sufficient hardware to run Devstral.
Minimum hardware:
- L40S (48GB VRAM) - minimum viable
- A100 (80GB VRAM) - recommended for headroom
Environment configuration for GPU HF Space:
DEFAULT_MODEL=devstral-small
TORCH_DTYPE=bf16
How it works:
- User has gpuEnabled=true in their profile
- Frontend router sends requests to the GPU HF Space URL
- GPU HF Space has DEFAULT_MODEL=devstral-small, so Devstral loads on startup
- /models endpoint returns devstral-small with available: true
- User automatically uses Devstral without touching the model selector
Backend decides default (Approach 1 - recommended):
The simplest approach is to let each backend decide its own default model via DEFAULT_MODEL environment variable:
- CPU HF Space: DEFAULT_MODEL=codegen-350m
- GPU HF Space: DEFAULT_MODEL=devstral-small
No frontend logic needed - GPU-enabled users automatically get Devstral because that's what the GPU backend loads.
Important: Frontend must not force a model_id
For this to work, the frontend must NOT hardcode model_id=codegen-350m in API requests. Either:
- Omit model_id from requests entirely - the backend uses DEFAULT_MODEL
- Use the backend's reported default - fetch it from the /models/current or /models endpoint
- Respect user selection - if the user explicitly picks a model, use that
Check existing API calls (e.g., /analyze/research/attention, /generate) to ensure they don't always send a static model_id. If they do, update them to omit it or use the backend's default.
Verification steps (do these in 2c-Step-1):
- Grep for hardcoded model_id: Search the Next.js app for model_id, codegen, and codegen-350m to find any hardcoded references.
- Check backend default behaviour: Confirm the backend uses DEFAULT_MODEL when model_id is omitted from requests. Test with a curl that omits model_id and verify it uses the expected default (see the sketch below).
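A scripted version of that second check (a sketch; add the HF Authorization header when targeting the private GPU Space, and swap the URL for the CPU Space or Spark as needed):
# verify_default_model.py - send a request without model_id, then ask which model served it.
import requests

BACKEND = "https://visualisable-ai-api-gpu.hf.space"
HEADERS = {}  # e.g. {"Authorization": "Bearer hf_xxxx"} for the private GPU Space

analyze = requests.post(f"{BACKEND}/analyze/research/attention",
                        json={"prompt": "def hello():", "max_tokens": 2},
                        headers=HEADERS, timeout=600)
analyze.raise_for_status()

current = requests.get(f"{BACKEND}/models/current", headers=HEADERS, timeout=30).json()
print(current)  # expect id=devstral-small on the GPU Space when model_id is omitted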
2c.9 HuggingFace Space Deployment Mechanics
How deployment works: The backend is deployed to HuggingFace Spaces via GitHub Actions.
- Repository: Backend code lives in the visualisable-ai-backend repo
- Trigger: Push to the main branch triggers the GitHub Actions workflow
- Workflow: .github/workflows/security-check.yml (job: deploy-to-huggingface) pushes code to both HF Space git remotes
- Space rebuild: HuggingFace automatically rebuilds the Space when it receives the push
Current deployment targets:
- CPU Space: visualisable-ai/api → https://huggingface.co/spaces/visualisable-ai/api
- GPU Space: visualisable-ai/api-gpu → https://huggingface.co/spaces/visualisable-ai/api-gpu
Key files:
- .github/workflows/security-check.yml - security checks + deployment workflow
- Dockerfile - HF Space build configuration (already exists in repo root)
- Space settings on HuggingFace - environment variables, hardware tier, visibility
To deploy Devstral to GPU HF Space:
- Ensure Phase 2 changes (Devstral support) are merged to main
- GitHub Actions deploys to the Space automatically
- In HuggingFace Space settings:
  - Set DEFAULT_MODEL=devstral-small
  - Set TORCH_DTYPE=bf16
  - Upgrade hardware tier to L40S (48GB) or A100 (80GB)
  - Ensure the Space is Private (from Phase 0)
- Space rebuilds and loads Devstral on startup
Secrets configuration:
- HuggingFace Space variables are set in Space Settings > Variables
- GitHub Actions secrets (for pushing to HF) are in repo Settings > Secrets
- Vercel env vars (for API routes) are separate from HF Space vars
2c.10 Recommended Implementation Order
To reduce risk, implement Phase 2c in two sub-steps:
2c-Step-1: Fix per-user routing (CPU HF vs GPU HF)
- Extend the lib/backend-fetch.ts helper (created in Phase 0.5) with backendProxy
- Update all API routes to use backendFetch
- Test: GPU toggle correctly routes to the GPU HuggingFace Space
- This is pure production correctness, no new features
2c-Step-2: Add Spark as extra backend option
- Add Spark to backend-router.ts (gated by local mode)
- Add Spark toggle to admin UI (local mode only)
- Test: Spark toggle routes to spark-c691.local:8000
- This is a local-only developer feature
2c.11 Validation Criteria
Step 1 (Production correctness):
- lib/backend-fetch.ts helper created (with backendProxy for proxy routes)
- All proxy routes have export const runtime = 'nodejs'
- All API routes updated to use per-user backend routing (no more hardcoded BACKEND_URL)
- Grep verification: No hardcoded model_id=codegen-350m found in frontend code
- Backend verification: Backend uses DEFAULT_MODEL when model_id is omitted (test with curl)
- Acceptance test: Enable GPU toggle (with Remote), confirm requests go to the GPU HuggingFace Space
- /models endpoint exists on the backend and returns available models
- GPU HF Space configured with DEFAULT_MODEL=devstral-small and sufficient VRAM (L40S minimum)
- Acceptance test: GPU-enabled user in production automatically uses Devstral (no model selector interaction needed)
- Acceptance test (model_id verification): As a GPU-enabled user in production:
  - Call /models/current via your Vercel API route (or hit the GPU HF Space directly with auth)
  - Expect: id=devstral-small, device=cuda, dtype=bf16
  - This proves no hidden model_id=codegen-350m is being sent and Devstral is active
Step 2 (Spark local-only feature):
- NEXT_PUBLIC_SPARK_BACKEND_URL environment variable added
- Backend router recognizes backendOverride: 'spark' (only in local mode)
- Admin UI shows the Spark toggle (only in local mode)
- Spark toggle is mutually exclusive with the Remote toggle
- TierIndicator shows correct status for the Spark connection
- Acceptance test (local mode): Enable the Spark toggle, confirm requests go to spark-c691.local:8000
- Acceptance test (local mode): Switch between Local/Remote/Spark, confirm the correct backend is used each time
- Acceptance test (production): Spark toggle has no effect (falls through to HuggingFace)
Phase 3: Deploy Devstral to DGX Spark
Goal: Run Devstral on DGX Spark with GPU acceleration (BF16).
3.1 Update Spark Environment
# On Spark, update .env.spark
DEFAULT_MODEL=devstral-small
TORCH_DTYPE=bf16
MAX_CONTEXT=8192
BATCH_SIZE=1
3.2 Rebuild and Deploy
cd /srv/projects/visualisable-ai-backend
git pull
docker compose -f docker/compose.spark.yml --env-file .env.spark up -d --build
3.3 Monitor First Load
First load will download ~48GB model weights. Monitor with:
# Watch logs
docker compose -f docker/compose.spark.yml logs -f
# Check health (should return fast even during download)
watch -n 5 'curl -s http://spark-c691.local:8000/health'
# Check readiness (will fail until model loaded)
watch -n 10 'curl -s http://spark-c691.local:8000/ready'
Note: First download can take a significant amount of time depending on network speed. Disk usage will spike in /srv/models-cache/huggingface during download (~48GB for Devstral weights). Ensure sufficient disk space is available before starting.
3.4 Verify Model on GPU
Use the /debug/device endpoint (added in Phase 1.5) to verify the model is on GPU:
curl -s http://spark-c691.local:8000/debug/device | python -m json.tool
Expected response should show model_device: "cuda:0" (or similar CUDA device).
Why not python -c exec? Importing the module in a separate process creates a fresh manager instance with no model loaded—it won't reflect the state of the running Uvicorn process. An HTTP endpoint queries the actual running service.
3.5 Validation Criteria
- /health returns 200 fast even during model download
- /ready returns 200 after the model is loaded
- Devstral loads on GPU (verified via deterministic check, not just logs)
- Memory usage is ~48GB VRAM (BF16)
- Inference is fast (GPU-accelerated, <5s for small prompts)
- Analysis endpoint works with Devstral
- Frontend displays 40 layers correctly with proper stage labels
Phase 4: Future Enhancements (Optional)
Note: Devstral on GPU HuggingFace Space is now a required part of Phase 2c (for GPU-enabled production users). This phase covers additional optional enhancements.
4.1 Runtime Model Switching
Current approach: One-model-per-deployment. Each backend loads a single model on startup via DEFAULT_MODEL environment variable. This is simpler and keeps memory predictable.
Future option: Add POST /models/load endpoint for runtime model switching:
@app.post("/models/load")
def load_model(model_id: str):
"""Load a different model at runtime."""
# Unload current model
# Load new model
# Return new model info
Trade-offs:
- Useful for research (switch models without redeploying)
- Adds complexity: queueing, load state management, eviction, edge cases (requests arriving mid-load)
- Memory management becomes more complex with multiple large models
Recommendation: Keep one-model-per-deployment for v1. Add runtime switching only if there's a clear need.
4.2 Quantized Devstral Variant
Not applicable for this project. This is PhD research requiring full-precision BF16 for accurate attention pattern analysis. Quantization introduces numerical artifacts that would compromise research validity.
For reference, if quantization were acceptable:
- 4-bit GPTQ or AWQ quantization reduces VRAM to ~12-16GB
- Allows running on smaller GPU tiers (T4, L4)
- Trade-off: quality loss makes this unsuitable for research purposes
4.3 Additional Deployment Targets
Other optional deployment options:
- Third HF Space for specific use cases (e.g., research-only access)
- Self-hosted Kubernetes with auto-scaling
- Modal/RunPod for burst capacity
4.4 Entrypoint Consistency
The codebase has two service paths:
- Spark backend: backend.model_service:app on port 8000
- HuggingFace wrapper: app:app on port 7860
If adding new deployment targets, ensure they use consistent entrypoints and expose the same API surface.
Rollback Procedures
If a deployment fails or causes issues, use these rollback procedures:
HuggingFace Space Rollback
Option A: Revert via GitHub
- Revert the problematic commit on
mainbranch - Push the revert - GitHub Actions will redeploy the previous version
- In HF Space settings, change
DEFAULT_MODELback if needed
Option B: Manual Space revert
- Go to HuggingFace Space > Files > History
- Find the last known good commit
- Click "Revert to this version"
- Update environment variables if needed
Option C: Change model without redeploying
- In HF Space settings, change
DEFAULT_MODEL=codegen-350m - Restart the Space (Settings > Restart)
- Space will reload with CodeGen instead of Devstral
DGX Spark Rollback
Quick rollback (change model):
# On Spark host
cd /srv/projects/visualisable-ai-backend
# Edit .env.spark to change DEFAULT_MODEL
vim .env.spark
# Change: DEFAULT_MODEL=codegen-350m
# Restart container
docker compose -f docker/compose.spark.yml --env-file .env.spark up -d
Full rollback (previous code version):
# On Spark host
cd /srv/projects/visualisable-ai-backend
# Find the last known good commit
git log --oneline -10
# Reset to that commit
git checkout <commit-hash>
# Rebuild and restart
docker compose -f docker/compose.spark.yml --env-file .env.spark up -d --build
Rollback to previous Docker image (if tagged):
# If you tagged the previous working image
docker compose -f docker/compose.spark.yml --env-file .env.spark down
docker run -d --gpus all -p 8000:8000 --env-file .env.spark visualisable-ai-backend:last-known-good
Monitoring
Lightweight monitoring approach for v1:
Health Checks
All backends expose /health (process alive) and /ready (model loaded):
# Quick status check
curl -s http://spark-c691.local:8000/health | jq
curl -s http://spark-c691.local:8000/ready | jq
curl -s http://spark-c691.local:8000/debug/device | jq
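For reference, a minimal sketch of the `/health` vs `/ready` contract these checks rely on (handler bodies are illustrative, not the project's actual `backend/model_service.py`):

```python
# Illustrative: /health only proves the process is alive; /ready reflects model state.
from fastapi import FastAPI
from fastapi.responses import JSONResponse

app = FastAPI()
app.state.model = None  # set by the loader once the model finishes loading

@app.get("/health")
def health():
    # Must return quickly (< 100 ms) even while the model is still loading.
    return {"status": "ok"}

@app.get("/ready")
def ready():
    if app.state.model is None:
        # 503 drives the frontend's "Loading" state described below.
        return JSONResponse({"status": "loading"}, status_code=503)
    return {"status": "ready"}
```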
Uptime Monitoring
For Spark (local network), use a simple cron job or uptime check:
# Add to crontab on a machine that can reach Spark
*/5 * * * * curl -sf http://spark-c691.local:8000/health > /dev/null || echo "Spark down" | mail -s "Alert: Spark unhealthy" you@example.com
For HuggingFace Spaces:
- Use HuggingFace's built-in Space status monitoring
- Or set up an external uptime monitor (UptimeRobot, Pingdom, etc.) to check the Space URL
Frontend Status Indicator
In the app, show backend connection status based on /health and /ready:
- Connected (green): `/health` returns 200, `/ready` returns 200
- Loading (yellow): `/health` returns 200, `/ready` returns 503
- Unreachable (red): `/health` fails or times out
This gives users visibility into backend state without needing server-side monitoring.
Summary: Files to Create/Modify
Phase 0 (Secure GPU HF Space + Verify Basic Routing)
| File | Action |
|---|---|
| `visualisable-ai/lib/backend-auth.server.ts` | CREATE (`getBaseAuthHeaders`, `getHfAuthHeader`, `isHfSpace`) |
| `visualisable-ai/lib/backend-router.ts` | MODIFY (remove secrets from `getBackendHeaders`) |
| Vercel Environment | ADD HF_TOKEN (server-side only) |
| HuggingFace GPU Space | CONFIGURE (set to Private, configure sleep timeout) |
Phase 0.5 (Fix Critical API Routing + Prove GPU Routing)
| File | Action |
|---|---|
| `visualisable-ai/lib/backend-fetch.ts` | CREATE (per-user backend fetch helper) |
| `visualisable-ai/app/api/research/attention/analyze/route.ts` | MODIFY (use `backendFetch`) |
Phase 1 (Infrastructure)
| File | Action |
|---|---|
| `Dockerfile` | CREATE |
| `docker/compose.spark.yml` | CREATE |
| `.env.spark.example` | CREATE |
| `.gitignore` | MODIFY (add `.env.spark`, `runs/`) |
| `backend/model_service.py` | MODIFY (ensure `/health` is fast, add `/ready`, add `/debug/device`) |
Phase 2 (Devstral Backend Support)
| File | Action |
|---|---|
| `backend/model_adapter.py` | MODIFY (add `MistralAdapter`) |
| `backend/model_config.py` | MODIFY (add `devstral-small`) |
| `backend/model_service.py` | MODIFY (fix layer classification, wire env vars, add `/models` and `/models/current` endpoints) |
Phase 2b (Frontend Dynamic Handling)
| File | Action |
|---|---|
| `components/research/VerticalPipeline.tsx` | MODIFY (dynamic layers, vocab) |
| `components/research/SpreadsheetGrid.tsx` | MODIFY (dynamic `head_dim`, if applicable) |
Phase 2c (Frontend Routing + GPU HF Devstral)
| File | Action |
|---|---|
| `visualisable-ai/lib/backend-router.ts` | MODIFY (add Spark backend option) |
| `visualisable-ai/app/admin/users/page.tsx` | MODIFY (add Spark toggle) |
| `visualisable-ai/app/api/proxy/[...path]/route.ts` | MODIFY (use `backendProxy` + `runtime='nodejs'`) |
| `visualisable-ai/app/api/backend/[...path]/route.ts` | MODIFY (use `backendProxy` + `runtime='nodejs'`) |
| `visualisable-ai/app/api/demos/route.ts` | MODIFY (use `backendFetch`) |
| `visualisable-ai/app/api/demos/run/route.ts` | MODIFY (use `backendFetch`) |
| `visualisable-ai/app/api/vocabulary/*.ts` | MODIFY (use `backendFetch`) |
| `visualisable-ai/app/api/token/metadata/route.ts` | MODIFY (use `backendFetch`) |
| *(Note: `/api/research/attention/analyze` already updated in Phase 0.5)* | — |
| `visualisable-ai/.env.local` | MODIFY (add `NEXT_PUBLIC_SPARK_BACKEND_URL`, `NEXT_PUBLIC_MODE`) |
| `visualisable-ai/.env.example` | MODIFY (document env vars, warn about `NEXT_PUBLIC_MODE`) |
| `visualisable-ai/components/TierIndicator.tsx` | MODIFY (optional: add Spark indicator) |
| GPU HF Space | CONFIGURE (DEFAULT_MODEL=devstral-small, upgrade to L40S/A100) |
Phase 3 (Spark Deployment)
| File | Action |
|---|---|
| `.env.spark` | MODIFY (change `DEFAULT_MODEL` to `devstral-small`, `TORCH_DTYPE=bf16`) |
Quick Checklist
Before marking each phase complete, verify:
Phase 0 (Secure GPU HF Space)
- GPU HF Space set to Private
- `HF_TOKEN` added to Vercel (server-side only, no `NEXT_PUBLIC_`)
- `lib/backend-auth.server.ts` created with `getBaseAuthHeaders()`, `getHfAuthHeader()`, `isHfSpace()`
- `getBackendHeaders()` in `backend-router.ts` cleaned up (no secrets)
- Sleep timeout configured (5 minutes)
- Direct unauthenticated request to GPU Space returns 401
Phase 0.5 (Fix Critical Routing)
- `lib/backend-fetch.ts` created with `backendFetch()` (minimal helper)
- At least one critical endpoint uses `backendFetch`
- GPU-enabled user's analyze request reaches GPU HF Space (verified)
- Free tier user's analyze request still goes to CPU HF Space
- (Note: `backendProxy()` added later in Phase 2c for proxy routes)
Phase 1
- `/health` returns fast (< 100 ms) even while the model is loading
- `/ready` endpoint exists and returns model load status
- `.env.spark` is gitignored
- Multi-branch guidance documented (ports + `compose -p`)
Phase 2
- MistralAdapter handles layer access correctly
- Layer classification uses percentages, not hardcoded indices (see the sketch after this checklist)
- Env vars (`TORCH_DTYPE`, `MAX_CONTEXT`, `BATCH_SIZE`) are wired into the loader
- `requires_gpu: True` for Devstral to guide users to Spark
- `/models` endpoint returns list of available models
- `/models/current` endpoint returns currently loaded model info
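A minimal sketch of percentage-based layer classification (the stage names and boundaries here are illustrative assumptions; the real thresholds live in the backend's classification logic in `backend/model_service.py`):

```python
# Illustrative: classify a layer by relative depth so one rule works for any
# layer count (e.g. CodeGen's 20 layers and Devstral's 40).
def classify_layer(layer_idx: int, num_layers: int) -> str:
    position = layer_idx / max(num_layers - 1, 1)  # 0.0 = first layer, 1.0 = last
    if position < 0.25:
        return "early"
    if position < 0.75:
        return "middle"
    return "late"

# Layer 15 of 20 and layer 30 of 40 land in the same stage, unlike hardcoded indices.
assert classify_layer(15, 20) == classify_layer(30, 40) == "late"
```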
Phase 2b
- Frontend stage boundaries are percentage-based
- Vocab size is dynamic, not hardcoded 51,200
- head_dim calculated from hidden_size/num_heads (if used)
Phase 2c
- All API routes use per-user routing (no hardcoded BACKEND_URL)
- All proxy routes have `export const runtime = 'nodejs'`
- GPU toggle correctly routes to GPU HuggingFace Space
- GPU HF Space has `DEFAULT_MODEL=devstral-small` and sufficient VRAM
- `/models/current` endpoint exists and returns current model info
- GPU Devstral proof: `/models/current` returns `id=devstral-small`, `device=cuda`, `dtype=bf16`
- GPU-enabled users automatically get Devstral in production
- Spark backend URL configurable via environment variable (local mode only)
- `NEXT_PUBLIC_MODE` only defined in `.env.local`, never in Vercel
- Admin UI has Spark toggle (mutually exclusive with Remote, local mode only)
- Model selector shows available models based on connected backend
Phase 3
- TORCH_DTYPE=bf16 in .env.spark
- Model loads on GPU (check logs)
- Inference is GPU-accelerated (fast)
- Frontend renders 40 layers correctly
Current Status
- Phase 0: Secure GPU HF Space + verify basic routing ✅ COMPLETE
- Phase 0.5: Fix critical API route routing (prove GPU routing works) ✅ COMPLETE
- Phase 1: Deploy CodeGen to DGX Spark ⏸️ PAUSED (see blocker below)
- Phase 2: Add Devstral backend support ✅ COMPLETE
- MistralAdapter added for Mistral/Devstral architecture
- devstral-small config with 40 layers, GQA (32 Q heads, 8 KV heads)
- Model-specific dtype (recommended_dtype field: codegen→fp16, devstral→bf16)
- Percentage-based layer classification (works for any layer count)
- /models and /models/current endpoints added
- Environment variable support (DEFAULT_MODEL, TORCH_DTYPE, MAX_CONTEXT, BATCH_SIZE)
- Phase 2b: Frontend dynamic layer handling ✅ COMPLETE
- Percentage-based stage boundaries in VerticalPipeline
- Dynamic vocab size from modelInfo
- Dynamic head_dim derived from actual matrix data
- Removed hardcoded "64 dimensions" in tutorial
- Phase 2c: API route conversion + GPU HF Space ✅ COMPLETE
- All 8 API routes converted to use backendFetch helper
- Server-side auth with HF token for private Spaces
- Per-user backend routing working
- GPU HF Space configured: A100 (80GB), DEFAULT_MODEL=devstral-small
- ⏸️ Spark toggle deferred (no benefit until PyTorch supports GB10)
- Phase 3: Deploy Devstral to DGX Spark ⏸️ BLOCKED (PyTorch sm_121 support)
- Phase 4: Future enhancements (optional)
Blocker: DGX Spark GB10 GPU Not Yet Supported by PyTorch
Date: December 2024
Status: ⏸️ Phase 1 paused pending PyTorch update
The Issue
The DGX Spark uses an NVIDIA GB10 GPU (Grace Blackwell architecture) with compute capability sm_121. Current PyTorch releases (including NGC containers up to 24.08) do not include pre-built CUDA kernels for sm_121.
Error observed:
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call
Hardware details:
- DGX Spark hostname: `spark-c691.local`
- GPU: NVIDIA GB10 (sm_121 compute capability)
- CUDA driver: 13.0
- Architecture: ARM64 (aarch64)
What We Tried
- NGC PyTorch container 24.08-py3 - Does not include sm_121 kernels
- NGC PyTorch container 24.11-py3 - Python 3.12 compatibility issues with dependencies
- Standard PyTorch images - No ARM64 + CUDA 13.0 support
- CPU fallback - Works but defeats the purpose of using Spark
What We Learned
From the PyTorch forums:
- sm_121 is binary compatible with sm_120 - The warning/error is overly cautious
- A PR exists to add sm_121 support, but it missed the PyTorch 2.9.0 release
- Workaround exists - Building PyTorch from source with sm_121 support works, but requires recompiling PyTorch, torchvision, and triton
Why We're Pausing (Rather Than Working Around It)
Running CodeGen on CPU on the Spark provides no benefit over:
- Mac Studio (512GB RAM) for local development
- HuggingFace Spaces (CPU and GPU options available)
The Spark deployment only makes sense when we can leverage the GB10 GPU. Building PyTorch from source is complex and fragile for a temporary workaround.
What's Ready on Spark
The following infrastructure is in place and ready to test once GPU support lands:
- Docker infrastructure: `docker/compose.spark.yml`
- Dockerfile: `docker/Dockerfile.spark` (using NGC container)
- Environment template: `.env.spark.example`
- SSH access configured with key-based auth
- Git clone at `/srv/visualisable/backend`
- Model cache directory: `/srv/models-cache/huggingface`
- Backend code has a `DEVICE` env var override for CPU fallback if needed (see the sketch after this list)
- `/health`, `/ready`, `/debug/device` endpoints added
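For reference, a sketch of the `DEVICE` override and the `/debug/device` report (the field names mirror the `cuda_available` / `model_device` checks in the restart instructions below; the code itself is illustrative, not the actual backend):

```python
# Illustrative: DEVICE=cpu in .env.spark forces the CPU fallback while sm_121
# kernels are missing; /debug/device exposes what was actually chosen.
import os

import torch
from fastapi import FastAPI

app = FastAPI()

def resolve_device() -> str:
    forced = os.getenv("DEVICE")  # e.g. "cpu" or "cuda"; unset = auto-detect
    if forced:
        return forced
    return "cuda" if torch.cuda.is_available() else "cpu"

@app.get("/debug/device")
def debug_device():
    model = getattr(app.state, "model", None)
    return {
        "cuda_available": torch.cuda.is_available(),
        "configured_device": resolve_device(),
        "model_device": str(next(model.parameters()).device) if model is not None else None,
    }
```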
Restart Instructions
When PyTorch officially supports sm_121 (expected in PyTorch 2.9.x patch or 2.10):
1. Check for updated NGC container:
   # Look for NGC PyTorch containers with sm_121 support
   # https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch
2. Update Dockerfile.spark:
   # Update to NGC container version with sm_121 support
   FROM nvcr.io/nvidia/pytorch:XX.XX-py3
3. On Spark, pull and rebuild:
   ssh dgxspark@spark-c691.local
   cd /srv/visualisable/backend
   git pull
   # Remove DEVICE=cpu from .env.spark (or comment it out)
   vim .env.spark
   # Rebuild with new NGC container
   docker compose -f docker/compose.spark.yml --env-file .env.spark up -d --build
4. Verify GPU is working:
   # Should show cuda_available: true, model_device: cuda:0
   curl -s http://spark-c691.local:8000/debug/device | python -m json.tool
   # Test inference
   curl -X POST http://spark-c691.local:8000/analyze/research/attention \
     -H "Content-Type: application/json" \
     -d '{"prompt": "def hello():", "max_tokens": 5}'
5. Continue with Phase 1 validation criteria
Monitoring PyTorch Progress
- PyTorch GitHub: Watch for sm_121 PRs
- NGC Container releases: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch
- PyTorch forums: https://discuss.pytorch.org/t/nvidia-dgx-spark-support/223677
Pre-Devstral Tag
Before making Phase 2 changes, both repos were tagged: pre-devstral-phase2-v1
To restore to this state if needed:
git checkout pre-devstral-phase2-v1
Phase 2 Completion Tags
After Phase 2/2b/2c completion (December 2024):
- Backend: Contains MistralAdapter, devstral-small config, /models endpoints
- Frontend: Contains dynamic layer handling, backendFetch conversion