seriffic Claude Sonnet 4.6 committed on
Commit ab4f0a6 · 1 Parent(s): 5de71b8

docs: droplet runbook -- AMD MI300X recreation reference


Live-introspected from droplet 569363721 (2026-05-06). Captures exact
docker run flags for vllm + riprap-models, ROCm/Docker apt sources,
firewall state, startup/restart behavior, secrets inventory (names
only), health-check commands, and a gaps list (update_hf_env.sh and
redeploy.sh are missing).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

Files changed (1)
  1. docs/DROPLET-RUNBOOK.md +418 -0
docs/DROPLET-RUNBOOK.md ADDED
@@ -0,0 +1,418 @@
# Droplet Runbook

_Last verified: 2026-05-06 (live introspection of droplet 569363721)_

## Spec

| Field | Value |
|-------|-------|
| Provider | DigitalOcean GPU Droplet (AMD Developer Cloud) |
| Droplet ID | 569363721 |
| Size slug | `gpu-mi300x1-192gb` (from hostname `0.17.1-gpu-mi300x1-192gb-devcloud-atl1`) |
| Region | `atl1` (Atlanta) |
| OS | Ubuntu 24.04.4 LTS |
| Kernel | 6.8.0-106-generic |
| Disk | 697 GiB root, 112 GiB used at inspection |
| RAM | 235 GiB |
| Swap | None |
| GPU | AMD Instinct MI300X VF (gfx942, model 0x74b5) |
| VRAM | 192 GiB (205,822,885,888 bytes) |
| ROCm SMI | 4.0.0+fc0010cf6a |
| ROCm lib | 7.8.0 (installed via `repo.radeon.com/rocm/apt/7.2`) |
| Docker | CE 29.4.2 (from official `download.docker.com/linux/ubuntu`) |

## Services

| Container | Image | Host Port | Container Port | Purpose |
|-----------|-------|-----------|----------------|---------|
| `vllm` | `vllm/vllm-openai-rocm:v0.17.1` | 8001 | 8000 | OpenAI-compatible LLM API (Granite 4.1 8B) |
| `riprap-models` | `riprap-models:latest` (local build) | 7860 | 7860 | GPU-specialist FastAPI service (Prithvi, TerraMind, GLiNER, Granite Embed, TTM) |

Both containers run with `--restart unless-stopped`. Docker is systemd-enabled, so the
full stack auto-starts on reboot with no manual intervention.

A **Caddy** process runs natively (port 80, systemd service) configured to reverse-proxy
to `localhost:8888`. Nothing was listening on 8888 at inspection time — this appears to
be a leftover placeholder, not load-bearing for Riprap.

## Existing provisioning scripts

| Script | What it does | Status |
|--------|--------------|--------|
| `scripts/deploy_droplet.sh` | Full bring-up: SSH verify, pull vLLM image, tar-stream + build riprap-models, start both containers, healthcheck. Idempotent — removes and recreates containers on re-run. | **Complete.** The canonical bring-up script. |
| `scripts/smoke_test_gpu.sh` | Smoke test hitting vLLM /v1/models, vLLM /v1/chat/completions, riprap-models /healthz, riprap-models /v1/granite-embed, /v1/gliner-extract. | **Complete.** Run after deploy to confirm the stack is live. |
| `scripts/save_droplet_image.sh` | Commits the running container, saves + compresses to a local tarball via scp. Useful as a fallback if the public-base Dockerfile rebuild fails. | Complete but **moot** once the bootstrap droplet is destroyed — requires a live droplet to extract from. |
| `scripts/probe_addresses.py` | End-to-end test against `/api/agent/stream` on the HF Space. 5/5 must pass before merging. | Not a droplet-setup script; it tests the full system end-to-end. |

**Gap:** No `update_hf_env.sh` exists. Updating HF Space env vars after a redeploy (new IP
or new token) is a manual `huggingface-cli space variables` command — see §Required
secrets below. This would be a good script to add.

**Gap:** No `redeploy.sh` wrapper exists. `deploy_droplet.sh` handles bring-up on a fresh
droplet but does not handle the HF Space variable update or the post-deploy probe run.
A `redeploy.sh` that chains `deploy_droplet.sh → huggingface-cli variables update →
probe_addresses.py` would complete the loop.

## Recreation steps

### 1. Provision the droplet

Use the DigitalOcean console or `doctl`. The exact size slug used was
`gpu-mi300x1-192gb`; pick `atl1` for the AMD Developer Cloud node type.

```bash
doctl compute droplet create riprap-gpu \
  --size gpu-mi300x1-192gb \
  --region atl1 \
  --image ubuntu-24-04-x64 \
  --ssh-keys <your-key-id>
```

Confirm `/dev/kfd` and `/dev/dri` are present before continuing:

```bash
ssh root@<new-ip> "ls /dev/kfd /dev/dri"
```

> **Note:** The AMD Developer Cloud GPU droplet image pre-installs ROCm and Docker.
> Steps 2–3 below document what was observed on the live system. On a fresh image from
> DigitalOcean's AMD GPU catalog they may already be satisfied — verify before running.

### 2. ROCm install

ROCm 7.2 was installed via the AMD repo. The following sources were present in
`/etc/apt/sources.list.d/`:

```
# /etc/apt/sources.list.d/rocm.list
deb [arch=amd64 signed-by=/etc/apt/keyrings/rocm.gpg] https://repo.radeon.com/rocm/apt/7.2 noble main

# /etc/apt/sources.list.d/amdgpu.list
deb [arch=amd64,i386 signed-by=/etc/apt/keyrings/rocm.gpg] https://repo.radeon.com/amdgpu/30.30/ubuntu noble main

# /etc/apt/sources.list.d/device-metrics-exporter.list
deb [arch=amd64 signed-by=/etc/apt/keyrings/rocm.gpg] https://repo.radeon.com/device-metrics-exporter/apt/1.4.0 noble main
```

Key packages confirmed installed (versions at inspection):

```
amdgpu-dkms      1:6.16.13.30300000-2278356.24.04
amdgpu-core      1:7.2.70200-2278374.24.04
hip-runtime-amd  7.2.26015.70200-43~24.04
hipblas          3.2.0.70200-43~24.04
hipblaslt        1.2.1.70200-43~24.04
hipcc            1.1.1.70200-43~24.04
hipfft           1.0.22.70200-43~24.04
hiprand          3.1.0.70200-43~24.04
hipsolver        3.2.0.70200-43~24.04
hipsparse        4.2.0.70200-43~24.04
```

**Gap:** The exact `amdgpu-install` invocation used to bootstrap the host ROCm install
was not captured (the AMD GPU droplet image likely pre-installs it via cloud-init).
If building on a bare Ubuntu 24.04 node, follow the [official ROCm 7.2 install guide](https://rocm.docs.amd.com/en/docs-7.2.0/deploy/linux/quick_start.html).
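
Whichever path installed ROCm, confirm the host actually reports the gfx942 architecture before moving on. A minimal check helper (my addition, not one of the repo's scripts) — the probe command is passed as an argument so the helper can be exercised off-droplet; on the host, run `gpu_arch_ok rocminfo`:

```shell
# gpu_arch_ok: succeed iff the given command's output mentions gfx942.
gpu_arch_ok() {
  "$@" | grep -q 'gfx942' && echo "gfx942 found"
}
```

`rocm-smi --showproductname` is an alternative probe if `rocminfo` is unavailable.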

### 3. Docker install

Docker CE was installed from the official Docker apt repo:

```
# /etc/apt/sources.list.d/docker.list
deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu noble stable
```

Packages installed:

```
docker-ce              5:29.4.2-2~ubuntu.24.04~noble
docker-ce-cli          5:29.4.2-2~ubuntu.24.04~noble
docker-buildx-plugin   0.33.0-1~ubuntu.24.04~noble
docker-compose-plugin  5.1.3-1~ubuntu.24.04~noble
```

Docker is **systemd-enabled** — starts automatically on reboot.

Standard install steps if needed:

```bash
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
  -o /etc/apt/keyrings/docker.asc
chmod a+r /etc/apt/keyrings/docker.asc
echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.asc] \
https://download.docker.com/linux/ubuntu noble stable" \
  > /etc/apt/sources.list.d/docker.list
apt-get update
apt-get install -y docker-ce docker-ce-cli containerd.io \
  docker-buildx-plugin docker-compose-plugin
systemctl enable --now docker
```

### 4. Pull and launch vLLM

The full `docker run` reconstructed from live `docker inspect`:

```bash
TOKEN=<your-bearer-token>
HF_CACHE=/root/hf-cache

mkdir -p "$HF_CACHE"

docker run -d --name vllm \
  --device=/dev/kfd \
  --device=/dev/dri \
  --group-add video \
  --ipc=host \
  --shm-size=16g \
  -p 8001:8000 \
  -v "${HF_CACHE}:/root/.cache/huggingface" \
  -e GLOO_SOCKET_IFNAME=eth0 \
  -e VLLM_HOST_IP=127.0.0.1 \
  --restart unless-stopped \
  vllm/vllm-openai-rocm:v0.17.1 \
  --model ibm-granite/granite-4.1-8b \
  --host 0.0.0.0 \
  --port 8000 \
  --api-key "$TOKEN" \
  --max-model-len 8192 \
  --served-model-name granite-4.1-8b
```

**Observed startup behavior (from logs):**
- Architecture resolved as `GraniteForCausalLM` (vanilla decoder, no hybrid Mamba)
- dtype: `torch.bfloat16`
- tensor_parallel_size: 1, pipeline_parallel_size: 1, data_parallel_size: 1
- prefix caching: enabled, chunked prefill: enabled
- Model load: ~24 s, 16.46 GiB memory
- Graph capture: ~8 s, 0.45 GiB additional
- Total cold init: ~35 s from container start to API ready
- CUDA graph sizes: 51 sizes up to 512 tokens
- First-request ROCm kernel JIT can add 30–50 s; subsequent requests are 30–50× faster

**`GLOO_SOCKET_IFNAME=eth0` is required.** Without it gloo fails to bind and the engine
core never initialises. Do not remove this env var.
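
Because of the ~35 s cold init, anything that drives the API right after a restart should poll for readiness rather than sleep a fixed amount. A minimal poll helper in the spirit of the `deploy_droplet.sh` healthcheck loop (a sketch, not the script's actual code):

```shell
# wait_for_url: poll until the URL answers successfully or the timeout
# (seconds, default 90 to match deploy_droplet.sh) elapses.
wait_for_url() {
  local url="$1" timeout="${2:-90}" waited=0
  until curl -sf -o /dev/null "$url"; do
    waited=$((waited + 3))
    [ "$waited" -ge "$timeout" ] && return 1
    sleep 3
  done
}

# e.g. against vLLM's unauthenticated health endpoint:
#   wait_for_url "http://127.0.0.1:8001/health" && echo "vLLM ready"
```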

### 5. Build and launch riprap-models

Build the image from the repo source. In practice `deploy_droplet.sh` tar-streams the
source from your local machine and runs this build on the droplet automatically:

```bash
# On the droplet after source is synced to /workspace/riprap-build:
cd /workspace/riprap-build && \
  docker build \
    -t riprap-models:latest \
    -f services/riprap-models/Dockerfile \
    .
```

Full `docker run` reconstructed from live `docker inspect`:

```bash
TOKEN=<your-bearer-token>   # same token as vLLM
HF_CACHE=/root/hf-cache

docker run -d --name riprap-models \
  --device=/dev/kfd \
  --device=/dev/dri \
  --group-add video \
  --ipc=host \
  --shm-size=8g \
  -p 7860:7860 \
  -v "${HF_CACHE}:/root/.cache/huggingface" \
  -e RIPRAP_MODELS_API_KEY="$TOKEN" \
  --restart unless-stopped \
  riprap-models:latest
```

Entrypoint: `uvicorn main:app --host 0.0.0.0 --port 7860 --log-level info --proxy-headers`

**Key environment variables baked into the image** (not injected at runtime, no override needed):

```
ROCM_PATH=/opt/rocm
LD_LIBRARY_PATH=/opt/rocm/lib:/usr/local/lib:
PYTORCH_ROCM_ARCH=gfx942
AITER_ROCM_ARCH=gfx942;gfx950
MORI_GPU_ARCHS=gfx942;gfx950
HSA_NO_SCRATCH_RECLAIM=1
TOKENIZERS_PARALLELISM=false
SAFETENSORS_FAST_GPU=1
HIP_FORCE_DEV_KERNARG=1
HF_HOME=/root/.cache/huggingface
TRANSFORMERS_CACHE=/root/.cache/huggingface
```

**Python packages confirmed in the running container** (at inspection):

| Package | Version |
|---------|---------|
| torch | 2.10.0 (ROCm build) |
| transformers | 4.57.6 |
| terratorch | 1.2.7 |
| torchgeo | 0.9.0 |
| torchvision | 0.24.1+d801a34 |
| torchaudio | 2.9.0+eaa9e4e |
| granite-tsfm | 0.3.6 |
| gliner | 0.2.26 |
| sentence-transformers | 5.4.1 |
| timm | 1.0.25 |
| safetensors | 0.8.0rc0 |
| segmentation_models_pytorch | 0.5.0 |
| pytorch-lightning | 2.6.1 |
| huggingface_hub | 0.36.2 |

> **`safetensors==0.8.0rc0` is a release candidate.** If the Dockerfile build fails on
> a fresh droplet with a pip resolution error on this package, bump it to the nearest
> stable release in `services/riprap-models/requirements-full.txt`.

**test_transform patch:** The v2 datamodule `test_transform` patch was confirmed present
in the running container at `/app/vllm/examples/pooling/plugin/prithvi_geospatial_mae_offline.py`.

**First-request model download:** The HF cache at `/root/hf-cache` is a bind mount that
survives container recreation. On a fresh droplet with an empty cache, the first request
to each specialist triggers a ~12 GB model download. Steady-state requests reuse the
cached weights.

### 6. Firewall

UFW was active at inspection. The relevant rules:

```bash
ufw limit 22/tcp   # SSH: rate-limited
ufw allow 80/tcp   # Caddy (reverse proxy placeholder)
ufw allow 443      # HTTPS (currently unused)
ufw deny 6601      # Explicit block
ufw deny 50061     # Explicit block
```

UFW **default is allow incoming**, so ports 8001 (vLLM) and 7860 (riprap-models) are
reachable from the public internet without an explicit allow rule. If you want to
restrict access to the HF Space only, add:

```bash
# Allow only HF Space egress IPs (check current HF IP ranges first)
ufw default deny incoming
ufw allow from <hf-space-ip-range> to any port 8001
ufw allow from <hf-space-ip-range> to any port 7860
ufw allow 22/tcp
```

### 7. Startup behavior

**The stack auto-starts on reboot with no manual intervention:**

- `dockerd` is managed by systemd (`systemctl is-enabled docker → enabled`)
- Both `vllm` and `riprap-models` containers have `RestartPolicy: unless-stopped`
- On reboot: systemd starts Docker → Docker restarts both containers automatically

**After a manual `docker stop` (e.g., for maintenance):** The containers will NOT
auto-start because `unless-stopped` respects explicit stops. Restart manually:

```bash
docker start vllm riprap-models
```

**After a full reboot or Docker daemon restart:** Auto-start kicks in — no action needed.

**vLLM cold-start warning:** After any restart, vLLM takes ~35 s to become ready
(`/v1/models` returns 200). ROCm kernel compilation adds another 30–50 s of latency on
the very first inference request. The HF Space will see timeouts during this window.
The `deploy_droplet.sh` healthcheck loop waits up to 90 s for vLLM to become ready.

## Required secrets

The stack uses a single shared bearer token for both services:

| Env var / flag | Container | Set where |
|----------------|-----------|-----------|
| `--api-key <TOKEN>` | `vllm` | Passed in `docker run` command (visible in `docker inspect`) |
| `RIPRAP_MODELS_API_KEY=<TOKEN>` | `riprap-models` | Passed in `docker run -e` flag (visible in `docker inspect`) |
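
There is no fixed token value to recover: both services just compare the string verbatim, so any high-entropy value works. One way to mint a fresh one (a suggested convention, not necessarily what the bootstrap used):

```shell
# 32 random bytes, hex-encoded: a 64-character bearer token.
TOKEN=$(openssl rand -hex 32)
echo "${#TOKEN}"   # 64
```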

**No `.env` file exists at `/root/.env` or `/etc/riprap*`.** The token is stored only
in the running container configuration. To read the live token from your workstation:

```bash
ssh root@<droplet-ip> \
  "docker inspect riprap-models --format '{{range .Config.Env}}{{println .}}{{end}}' | grep API_KEY"
```

**The HF Space must also know the token and the droplet's IP.** Set these Space
variables after every redeploy (new droplet = new IP and new token):

```bash
VLLM_PORT=8001
MODELS_PORT=7860
NEW_IP=<new-droplet-ip>
TOKEN=<new-bearer-token>

huggingface-cli space variables \
  lablab-ai-amd-developer-hackathon/riprap-nyc \
  RIPRAP_LLM_PRIMARY=vllm \
  RIPRAP_LLM_BASE_URL="http://${NEW_IP}:${VLLM_PORT}/v1" \
  RIPRAP_LLM_API_KEY="$TOKEN" \
  RIPRAP_ML_BACKEND=remote \
  RIPRAP_ML_BASE_URL="http://${NEW_IP}:${MODELS_PORT}" \
  RIPRAP_ML_API_KEY="$TOKEN"

huggingface-cli space restart lablab-ai-amd-developer-hackathon/riprap-nyc
```

## Health check

Two curl commands that confirm both services are live:

```bash
TOKEN=<your-bearer-token>
IP=134.199.193.99   # replace with new IP after redeploy

# vLLM — should return JSON with granite-4.1-8b in the model list
curl -s -H "Authorization: Bearer $TOKEN" \
  "http://${IP}:8001/v1/models" | python3 -m json.tool

# riprap-models — should return {"ok": true, ...}
curl -s "http://${IP}:7860/healthz"
```
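
To gate a script on the vLLM response rather than eyeballing the JSON, a small stdin filter (my addition; a plain `grep` on the `id` field — `jq` would be tidier if installed):

```shell
# model_listed: succeed iff a /v1/models payload on stdin lists the
# served model name from the vLLM docker run above.
model_listed() { grep -q '"id": *"granite-4.1-8b"'; }

# curl -s -H "Authorization: Bearer $TOKEN" "http://${IP}:8001/v1/models" \
#   | model_listed && echo "vllm serving granite-4.1-8b"
```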

For a deeper check run the smoke-test script:

```bash
bash scripts/smoke_test_gpu.sh "$IP" "$TOKEN"
# Want: 4 PASS, 0 FAIL
```

For a full end-to-end check via the HF Space:

```bash
.venv/bin/python scripts/probe_addresses.py \
  --base https://lablab-ai-amd-developer-hackathon-riprap-nyc.hf.space
# Want: 5/5 PASS
```

## Gaps in existing scripts

| Missing script | What it needs to do |
|----------------|---------------------|
| `scripts/update_hf_env.sh` | Accept `<ip> <token>` args, run `huggingface-cli space variables` to update `RIPRAP_LLM_BASE_URL`, `RIPRAP_LLM_API_KEY`, `RIPRAP_ML_BASE_URL`, `RIPRAP_ML_API_KEY`, then restart the Space. Called as the last step after a successful `deploy_droplet.sh`. |
| `scripts/redeploy.sh` | Thin orchestrator: generate a fresh token, call `deploy_droplet.sh <ip> <token>`, then call `update_hf_env.sh <ip> <token>`, then run `probe_addresses.py` against the live Space to confirm 5/5. Reduces a 4-step redeploy to one command. |
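
A sketch of the orchestrator the second row describes (hypothetical — neither helper exists yet, and the `update_hf_env.sh` call assumes the `<ip> <token>` interface proposed in the first row):

```shell
# redeploy: fresh token -> bring-up -> HF Space env update -> e2e probe.
redeploy() {
  local ip="$1"
  local token; token="$(openssl rand -hex 32)"
  bash scripts/deploy_droplet.sh "$ip" "$token" &&
    bash scripts/update_hf_env.sh "$ip" "$token" &&
    .venv/bin/python scripts/probe_addresses.py \
      --base https://lablab-ai-amd-developer-hackathon-riprap-nyc.hf.space
}
```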

`save_droplet_image.sh` is complete but only useful while a working droplet is alive.
The bootstrap droplet was destroyed 2026-05-06; this script cannot recover from that.

## Destroy checklist

- [ ] Note the current `RIPRAP_MODELS_API_KEY` / vLLM `--api-key` value (or accept that
      you'll generate a fresh one on the next bring-up and update HF Space variables)
- [ ] Confirm the three NYC fine-tune artefacts exist on HF Hub (they do):
      `msradam/TerraMind-NYC-Adapters`, `msradam/Prithvi-EO-2.0-NYC-Pluvial`,
      `msradam/Granite-TTM-r2-Battery-Surge`
- [ ] Confirm no model weights exist only on the droplet — all are fetched from HF Hub
      on first request; the `/root/hf-cache` bind mount does NOT survive droplet deletion
- [ ] Run `bash scripts/smoke_test_gpu.sh <ip> <token>` one final time; record result
- [ ] Run `python scripts/probe_addresses.py` one final time; record result
- [ ] Update HF Space env vars to point at a new droplet OR confirm the Space gracefully
      falls back to Ollama (pill will turn amber)
- [ ] `doctl compute droplet delete 569363721` or destroy via DO console
- [ ] Verify HF Space is still serving after destroy:
      `curl -sf https://lablab-ai-amd-developer-hackathon-riprap-nyc.hf.space/api/backend`