Remove old blog file
Browse files- The Digital Traffic Jam.md +0 -237
The Digital Traffic Jam.md
DELETED
|
@@ -1,237 +0,0 @@
|
|
| 1 |
-
# The Digital Traffic Jam: How We Gave Linux a 160-IQ Brain
|
| 2 |
-
|
| 3 |
-
*Built for the Meta PyTorch OpenEnv Hackathon 2026*
|
| 4 |
-
|
| 5 |
-
---
|
| 6 |
-
|
| 7 |
-
## 1. The Spinning Wheel of Death
|
| 8 |
-
|
| 9 |
-
You know the feeling. You're in a clutch gaming moment — or maybe you're screen-sharing on a 100-person Zoom call — and **BAM**. Everything freezes. The cursor stutters. The audio crackles. You stare at a spinning wheel, contemplating your life choices.
|
| 10 |
-
|
| 11 |
-
Here's the dirty secret: **your computer probably has plenty of power.** 64GB of RAM, 16 cores, an NVMe drive that could melt steel. So why does it still lag?
|
| 12 |
-
|
| 13 |
-
Because deep inside your operating system, there's a **waiter** running a 1,000-table restaurant with a 20-year-old rule book.
|
| 14 |
-
|
| 15 |
-
That waiter is the **Linux Completely Fair Scheduler (CFS)**. And "fair" doesn't mean "fast."
|
| 16 |
-
|
| 17 |
-
---
|
| 18 |
-
|
| 19 |
-
## 2. "Fair" Isn't Always "Fast"
|
| 20 |
-
|
| 21 |
-
Think of CFS like a traffic light at a busy intersection. It gives every direction an equal turn — 2 minutes of green, regardless of whether there are 50 cars waiting or zero.
|
| 22 |
-
|
| 23 |
-
That's *fair*. But it's also *stupid*.
|
| 24 |
-
|
| 25 |
-
Your PostgreSQL database needs the CPU **right now** because 10,000 users are waiting for a query result. But CFS gives equal time to a background log rotation that nobody cares about. Your latency-sensitive video call gets the same priority as a cron job checking disk space at 3 AM.
|
| 26 |
-
|
| 27 |
-
The rules are **static**. They don't learn. They don't adapt. They don't know that YOUR workload is different from everyone else's.
|
| 28 |
-
|
| 29 |
-
**Our mission was simple:** Fire the old rulebook. Hire an AI strategist that can *see the traffic coming* and change the lights in real-time.
|
| 30 |
-
|
| 31 |
-
---
|
| 32 |
-
|
| 33 |
-
## 3. Meet KernelX: The Super-Intern
|
| 34 |
-
|
| 35 |
-
KernelX is a **living, breathing scheduling policy** for Linux. Not just code — a system that watches, learns, and adapts.
|
| 36 |
-
|
| 37 |
-
### For the Non-Techie
|
| 38 |
-
|
| 39 |
-
Imagine you hired a brilliant intern to sit next to the restaurant waiter. This intern has a photographic memory — they remember every order, every delay, every complaint. After watching for a while, they start whispering suggestions:
|
| 40 |
-
|
| 41 |
-
> *"Hey, Table 7 has been waiting 10 minutes. Skip the dessert for Table 3 — they're fine — and rush that burger."*
|
| 42 |
-
|
| 43 |
-
That's KernelX. A brainy sidekick that watches how your apps behave and **nudges** the important ones to the front of the line.
|
| 44 |
-
|
| 45 |
-
### For the Techie (The Secret Sauce)
|
| 46 |
-
|
| 47 |
-
KernelX is an **eBPF-instrumented, LLM-powered, closed-loop kernel scheduling optimizer**. Here's the stack:
|
| 48 |
-
|
| 49 |
-
```
|
| 50 |
-
Linux Kernel (eBPF sentinel captures 24D telemetry at every sched_switch)
|
| 51 |
-
│
|
| 52 |
-
▼
|
| 53 |
-
Rust Bridge (ring buffer → shared memory + trajectory JSONL, <1ms latency)
|
| 54 |
-
│
|
| 55 |
-
▼
|
| 56 |
-
Python Brain (SmolLM2-360M-Instruct, quantized to GGUF Q4_K_M, 44ms inference)
|
| 57 |
-
│
|
| 58 |
-
▼
|
| 59 |
-
Scheduling Action [-1.0 to +1.0] → ZMQ → Bridge → eBPF priority_actions map
|
| 60 |
-
│
|
| 61 |
-
▼
|
| 62 |
-
Kernel applies the nudge at the very next context switch
|
| 63 |
-
```
|
| 64 |
-
|
| 65 |
-
The model uses **GRPO (Group Relative Policy Optimization)** — think of it as competitive learning. We show the AI multiple ways to handle traffic, and it gets a "reward" when latency goes down and a "penalty" when it makes things worse. Over time, it learns to *see around corners*.
|
| 66 |
-
|
| 67 |
-
---
|
| 68 |
-
|
| 69 |
-
## 4. The Workout Loop: Collect, Train, Repeat
|
| 70 |
-
|
| 71 |
-
This is the Rocky montage for your CPU.
|
| 72 |
-
|
| 73 |
-
### The Game Tape (Collect)
|
| 74 |
-
|
| 75 |
-
The eBPF sentinel records every context switch with a 24-dimensional feature vector: CPU core, process priority, virtual runtime, wait time, context switch count, CPU migrations, and more. We collected **534,134 transitions** from a real Linux machine under mixed workloads.
|
| 76 |
-
|
| 77 |
-
But we're not drowning in data — the Rust bridge is selective. It only saves:
|
| 78 |
-
- **High-pain events**: wait time > 500μs (the moments that matter)
|
| 79 |
-
- **10% random sample**: for baseline comparison
|
| 80 |
-
|
| 81 |
-
This cuts data volume by **95%** while keeping every important "learning moment."
|
| 82 |
-
|
| 83 |
-
### The Study Session (Train)
|
| 84 |
-
|
| 85 |
-
We fed that data into SmolLM2-360M using a two-phase approach:
|
| 86 |
-
|
| 87 |
-
**Phase 1 — SFT Warm-Start**: Taught the model the format. "When you see high latency, output a negative number (boost priority). When things are calm, output near-zero (hands off)." Think of it as giving the intern the employee handbook.
|
| 88 |
-
|
| 89 |
-
**Phase 2 — GRPO Reinforcement Learning**: The real magic. The model generates scheduling decisions, sees what actually happened in the kernel, and adjusts. It learns things we never programmed:
|
| 90 |
-
|
| 91 |
-
> One unexpected discovery: the model learned to slightly *demote* processes with very low wait times and high exec_runtime — these were CPU hogs that weren't hurting but were monopolizing the scheduler's attention. By gently deprioritizing them, overall system responsiveness improved.
|
| 92 |
-
|
| 93 |
-
### The Instant Upgrade (Deploy)
|
| 94 |
-
|
| 95 |
-
And here's the coolest part: **we can hot-swap the AI's brain while the system is running.** One API call:
|
| 96 |
-
|
| 97 |
-
```
|
| 98 |
-
POST /reload-policy?model_path=/path/to/new/model.gguf
|
| 99 |
-
```
|
| 100 |
-
|
| 101 |
-
No rebooting. No downtime. The kernel just starts getting smarter *while you're using it*.
|
| 102 |
-
|
| 103 |
-
---
|
| 104 |
-
|
| 105 |
-
## 5. Shrinking a Library into a Pocketbook
|
| 106 |
-
|
| 107 |
-
The raw model is 1.4GB. That's too fat for real-time kernel scheduling.
|
| 108 |
-
|
| 109 |
-
Enter **4-bit quantization (GGUF Q4_K_M)**. We shrank the model from 1.4GB down to **258MB** — like compressing an entire library into a pocketbook that fits in the kernel's back pocket.
|
| 110 |
-
|
| 111 |
-
The result:
|
| 112 |
-
- **44ms inference** on a laptop CPU (warm cache)
|
| 113 |
-
- **Sub-50ms target achieved** — the AI thinks faster than you can blink
|
| 114 |
-
- The model doesn't *become* the lag it's trying to fix
|
| 115 |
-
|
| 116 |
-
---
|
| 117 |
-
|
| 118 |
-
## 6. The Results: "Is That Even Legal?"
|
| 119 |
-
|
| 120 |
-
### Training Convergence
|
| 121 |
-
|
| 122 |
-
| Metric | Before Training | After Training | Change |
|
| 123 |
-
|--------|----------------|----------------|--------|
|
| 124 |
-
| Training Loss | 2.05 | 0.28 | **-86%** |
|
| 125 |
-
| Token Accuracy | 61% | 91% | **+49%** |
|
| 126 |
-
| Format Compliance | 0% | 100% | **Perfect** |
|
| 127 |
-
| Model Size | 1,400 MB | 258 MB | **-82%** |
|
| 128 |
-
| Inference Latency | ∞ | 44ms | **Real-time** |
|
| 129 |
-
|
| 130 |
-
### The Before vs. After
|
| 131 |
-
|
| 132 |
-
In simulation on real kernel telemetry:
|
| 133 |
-
|
| 134 |
-
| Strategy | Avg Latency | Latency Reduction | Reward |
|
| 135 |
-
|----------|-------------|-------------------|--------|
|
| 136 |
-
| **Linux CFS (Default)** | Baseline | — | Baseline |
|
| 137 |
-
| **Hand-Written Heuristic** | -15% | 15% better | +2% |
|
| 138 |
-
| **KernelX AI Strategist** | **-25%** | **25% better** | **+8%** |
|
| 139 |
-
|
| 140 |
-
For the non-techie: imagine your 1-hour commute becoming a 45-minute drive. That's what we did for your data — and with more GRPO iterations on live data, the improvement compounds.
|
| 141 |
-
|
| 142 |
-
### The Moment It Clicked
|
| 143 |
-
|
| 144 |
-
The chart that made us jump out of our chairs:
|
| 145 |
-
|
| 146 |
-
The training loss fell from 2.05 to 0.28 in the first epoch — the model was *inhaling* the kernel's patterns. By the time accuracy hit 91%, it was generating valid scheduling actions for states it had never seen before.
|
| 147 |
-
|
| 148 |
-
---
|
| 149 |
-
|
| 150 |
-
## 7. The "Ooooh, Shiny!" Bits
|
| 151 |
-
|
| 152 |
-
### The 24D Telemetry Vector
|
| 153 |
-
|
| 154 |
-
Every context switch gives us 24 dimensions of kernel truth. But most of them are noise. Our preprocessing pipeline applies **symmetric log scaling** (compressing trillion-scale vruntime values to ~29) and drops the 14 zero/placeholder features, leaving a crisp 10D representation:
|
| 155 |
-
|
| 156 |
-
```
|
| 157 |
-
cpu:10 | prio:120 | exec_ns:22.27 | vrt:28.78 | migr:8.98 | cpus:16 | csw:1 | wt_us:17
|
| 158 |
-
```
|
| 159 |
-
|
| 160 |
-
Token-efficient. Human-readable. LLM-friendly.
|
| 161 |
-
|
| 162 |
-
### The Reward Function
|
| 163 |
-
|
| 164 |
-
We don't just say "reduce latency." We decompose the reward into three competing objectives:
|
| 165 |
-
|
| 166 |
-
$$R_t = \alpha \cdot \log(\Delta_{exec} + 1) - \beta \cdot \Delta_{wait} - \gamma \cdot |a_t - a_{t-1}|$$
|
| 167 |
-
|
| 168 |
-
- **Throughput** (α=1.0): Did the process make CPU progress?
|
| 169 |
-
- **Latency** (β=2.0): Did wait time increase? *Heavy penalty.*
|
| 170 |
-
- **Stability** (γ=0.5): Did the action jitter from last time? *Don't oscillate.*
|
| 171 |
-
|
| 172 |
-
This forces the model to balance speed, responsiveness, and smoothness — just like a real scheduler should.
|
| 173 |
-
|
| 174 |
-
### The Terminal Dashboard
|
| 175 |
-
|
| 176 |
-
Not just numbers in a log file. A btop-inspired Ratatui TUI shows everything in real-time:
|
| 177 |
-
- CPU core utilization with color-coded bars
|
| 178 |
-
- P99 latency gauge (green → yellow → red)
|
| 179 |
-
- AI decision panel with action value, confidence, and target PID
|
| 180 |
-
- Reward curve sparkline
|
| 181 |
-
- Connection status indicators (SHM / Bridge / Brain)
|
| 182 |
-
- Full 24D telemetry grid with compact number formatting
|
| 183 |
-
|
| 184 |
-
It reads from the same shared memory as the brain — zero overhead.
|
| 185 |
-
|
| 186 |
-
---
|
| 187 |
-
|
| 188 |
-
## 8. The OpenEnv Contract
|
| 189 |
-
|
| 190 |
-
KernelX isn't a demo hack — it's a proper OpenEnv environment. Judges (and future researchers) can:
|
| 191 |
-
|
| 192 |
-
```python
|
| 193 |
-
env.reset() # Start a scheduling episode
|
| 194 |
-
obs = env.step(action=0.5) # Apply a demote action, observe result
|
| 195 |
-
env.state # Check episode progress
|
| 196 |
-
env.stop() # End episode, get final score
|
| 197 |
-
```
|
| 198 |
-
|
| 199 |
-
The environment runs as a FastAPI server. Connect any RL training loop — TRL, Stable Baselines, custom GRPO — and train a better scheduler.
|
| 200 |
-
|
| 201 |
-
---
|
| 202 |
-
|
| 203 |
-
## 9. What We'd Do with More Time
|
| 204 |
-
|
| 205 |
-
- **Reward Normalization**: Our GRPO hit gradient explosion because wait_delta can be 89,000μs. Clipping the latency penalty would stabilize training.
|
| 206 |
-
- **PMU Features**: 14 of our 24 feature slots are reserved for hardware performance counters (IPC, cache misses, branch mispredictions). Populating these via `perf_event_open` would give the model much richer state.
|
| 207 |
-
- **Multi-Process Reasoning**: Currently the model acts on one PID. A multi-agent extension could reason about process *interactions* — "PostgreSQL is blocking on I/O, so boost the filesystem daemon."
|
| 208 |
-
- **Personalized OS**: The long-term vision? An operating system that *knows you*. If you're a video editor, it becomes a workstation. If you're a gamer, it becomes a console. All automatically, all learned.
|
| 209 |
-
|
| 210 |
-
---
|
| 211 |
-
|
| 212 |
-
## 10. We Didn't Just Fix the Traffic Jam
|
| 213 |
-
|
| 214 |
-
We taught the road how to build itself.
|
| 215 |
-
|
| 216 |
-
KernelX proves that a small language model (360M parameters, 258MB quantized) can make meaningful real-time scheduling decisions at kernel speed. It's not replacing CFS — it's *augmenting* it with learned intelligence.
|
| 217 |
-
|
| 218 |
-
The eBPF sentinel sees what's happening. The Rust bridge moves data at memory speed. The LLM thinks in 44 milliseconds. And the kernel acts.
|
| 219 |
-
|
| 220 |
-
**Your computer just got a 160-IQ brain.**
|
| 221 |
-
|
| 222 |
-
---
|
| 223 |
-
|
| 224 |
-
## Links
|
| 225 |
-
|
| 226 |
-
| Resource | URL |
|
| 227 |
-
|----------|-----|
|
| 228 |
-
| Live Demo (Simulation) | [huggingface.co/spaces/Rayugacodes/KernelX](https://huggingface.co/spaces/Rayugacodes/KernelX) |
|
| 229 |
-
| Trained Model | [huggingface.co/Rayugacodes/kernelx-strategist](https://huggingface.co/Rayugacodes/kernelx-strategist) |
|
| 230 |
-
| Training Data (534K transitions) | [huggingface.co/datasets/Rayugacodes/kernelx-training-data](https://huggingface.co/datasets/Rayugacodes/kernelx-training-data) |
|
| 231 |
-
| Colab Training Notebook | [KernelX_Training.ipynb](https://colab.research.google.com/github/pie-314/KernelX/blob/model-training-hugging-face-integration/KernelX_Training.ipynb) |
|
| 232 |
-
| Source Code | [github.com/pie-314/KernelX](https://github.com/pie-314/KernelX) |
|
| 233 |
-
|
| 234 |
-
---
|
| 235 |
-
|
| 236 |
-
*KernelX — Meta PyTorch OpenEnv Hackathon 2026*
|
| 237 |
-
*Team: Naman Gupta & Team*
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|