v0.5.0 chat-tuned ghost-small (CTIBench MCQ 36.9%)
Files changed:
- README.md (+351 lines)
- config.json (+33 lines)
- pytorch_model.pt (+3 lines)
- tokenizer_config.json (+5 lines)
README.md
ADDED
---
language:
- en
license: mit
library_name: pytorch
tags:
- cybersecurity
- transformer
- language-model
- decoder-only
- from-scratch
- cve
- ctf
- security
datasets:
- custom
pipeline_tag: text-generation
model-index:
- name: ghost-tiny
  results: []
- name: ghost-small
  results: []
---

# GhostLM: Cybersecurity Language Model

## Model Details

| Field | Value |
|---|---|
| **Model Names** | `ghostlm/ghost-small` (~45M params, current canonical); `ghostlm/ghost-tiny` (14.7M params, historical canonical and better PMI-suite scorer). Future: `ghost-base`, `ghost-1B` |
| **Architecture** | Decoder-only transformer |
| **Author** | [Joe Munene](https://github.com/joemunene-by) |
| **License** | MIT |
| **Language** | English |
| **Framework** | PyTorch (built from scratch, no pretrained weights) |
| **Version** | 0.4.0 (Phase 4 ghost-small: 30K steps on the 12.56M-token Phase 3.6 corpus, val_loss 2.3535, overall val PPL 11.12; capacity-reallocation hypothesis confirmed) |

## Model Description

GhostLM is a cybersecurity-focused decoder-only transformer language model built entirely from scratch in PyTorch. No pretrained weights, no wrappers: every component (attention, feed-forward, embeddings, training loop) is hand-implemented.

The model is trained on CVE vulnerability descriptions from the National Vulnerability Database, CTF writeups, and security research papers. It is designed for cybersecurity reasoning tasks: CVE analysis, exploit explanation, penetration-testing assistance, and security concept generation.

## Model Variants

| Variant | Layers | d_model | Heads | d_ff | Context | Params | Status |
|---|---|---|---|---|---|---|---|
| `ghostlm/ghost-tiny` | 2 | 256 | 4 | 1024 | 1024 | 14.7M | Phase 3.5 (historical canonical). 30K steps on ~8.8M tokens, overall PPL 66, PMI suite 31.2% |
| `ghostlm/ghost-small` | 6 | 512 | 8 | 2048 | 1024 | ~45M | **Phase 4 complete (current canonical). 30K steps on ~12.56M tokens, overall PPL 11.12 (−83%), val_loss 2.3535** |
| `ghostlm/ghost-base` | 12 | 768 | 12 | 3072 | 1024 | ~350M | Planned (rented GPU) |
| `ghostlm/ghost-1B` | 24 | 1024 | 16 | 4096 | 1024 | ~1B | Long-term goal |

ghost-tiny is the iteration vehicle. The scale ladder above is the path to a genuinely useful from-scratch cyber LM. See [ROADMAP.md](ROADMAP.md) for phased milestones, compute requirements, and corpus targets.

## Architecture

- **Type:** Decoder-only transformer with causal self-attention
- **Normalization:** Pre-norm (LayerNorm before attention and FFN sub-layers)
- **Positional encoding:** Learned positional embeddings
- **Activation:** GELU
- **Tokenizer:** GPT-2 BPE via tiktoken (50,257 base tokens + 4 special tokens = 50,261 total)
- **Weight tying:** Output projection shares weights with token embedding
- **Attention:** Multi-head causal self-attention with combined QKV projection
- **Initialization:** Normal(0, 0.02), with scaled residual init (std = 0.02 / sqrt(2 * n_layers)) for projection layers

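The scaled residual init can be sketched in a few lines. This is an illustrative sketch only; the `is_residual_proj` flag below is a hypothetical marker for layers that project back into the residual stream, not an attribute of the actual codebase:

```python
import math
import torch.nn as nn

def init_weights(module, n_layers):
    """GPT-2-style init: Normal(0, 0.02) everywhere, with residual
    projections scaled by 1/sqrt(2*n_layers) so the residual-stream
    variance stays roughly constant with depth (sketch, not project code)."""
    if isinstance(module, (nn.Linear, nn.Embedding)):
        nn.init.normal_(module.weight, mean=0.0, std=0.02)
    if isinstance(module, nn.Linear) and module.bias is not None:
        nn.init.zeros_(module.bias)
    # hypothetical marker: projections back into the residual stream
    if getattr(module, "is_residual_proj", False):
        nn.init.normal_(module.weight, mean=0.0,
                        std=0.02 / math.sqrt(2 * n_layers))
```

The scaling matters because each of the 2·n_layers residual additions (attention out-proj and FFN out-proj per block) otherwise grows the stream's variance linearly with depth.
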
## Training Data

The released v0.3.5 checkpoint was trained on the rebalanced Phase 3.5 corpus. NVD's full 333,540-record pull is on disk, but its training contribution is capped at 6M tokens by a content-hash subsample so the corpus isn't 90% CVE descriptions:

| Source | Records (raw → trained) | Trained tokens | Share | Type |
|---|---|---|---|---|
| NVD CVE Database | 333,540 → 71,828 | ~5.74M | **65.3%** | Real, capped via `--max-cve-tokens 6000000` |
| Synthetic CTF Writeups | 3,000 | ~1.51M | 17.2% | Synthetic, placeholder until real CTFtime grows |
| arXiv cs.CR Abstracts | 2,000 | ~0.74M | 8.4% | Real |
| CTFtime real writeups | 473 → 467 | ~0.47M | 5.3% | Real, inline-only, per-record attribution |
| MITRE ATT&CK | 691 | ~0.26M | 2.9% | Real (Apache 2.0) |
| CAPEC | 609 | ~0.07M | 0.9% | Real (Apache 2.0) |
| **Total (post-dedup)** | **74,635** | **~8.79M** | | train: 70,965 / val: 3,670 |

**Data splits:** deterministic by content hash, so identical or near-duplicate texts always land in the same split. The train/val leakage check returns 0.

**Token share comparison (what the model sees):**

| Phase | NVD share | Top non-NVD source | Overall |
|---|---|---|---|
| v0.3.3 (Phase 3) | 87% | CTF synthetic 5% | NVD-dominated |
| **v0.3.5 (Phase 3.5)** | **65.3%** | **synthetic 17.2%** | **balanced across 6 sources** |

The rebalance is reproducible: `python3 scripts/rebuild_corpus.py --max-cve-tokens 6000000` always produces the same 71,828-record CVE prefix.

**Topics covered:** vulnerability detection, adversarial ML, network intrusion, cryptographic protocols, fuzzing, side-channel attacks, ransomware detection, supply chain security, memory safety, WAF evasion, SQL injection, XSS, buffer overflow, privilege escalation, reverse engineering, binary exploitation, steganography, network forensics.

For corpus expansion plans (CTFtime, security blogs, MITRE ATT&CK, tool docs) and licensing notes, see [CORPUS.md](CORPUS.md).

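A deterministic content-hash split works roughly like this. A minimal sketch of the idea only; the function name and the 5% validation fraction are illustrative, not the project's actual code:

```python
import hashlib

def split_for(text: str, val_fraction: float = 0.05) -> str:
    """Deterministic content-hash split: the same text always hashes to
    the same bucket, so exact duplicates can never straddle train/val."""
    digest = hashlib.sha256(text.encode("utf-8")).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return "val" if bucket < val_fraction else "train"

# Duplicates land in the same split by construction:
assert split_for("CVE-2021-44228 ...") == split_for("CVE-2021-44228 ...")
```

Because the assignment depends only on the text itself, re-running the corpus build (or adding new records) never moves an existing record across the split boundary.
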
## Training Details

| Parameter | Value |
|---|---|
| Optimizer | AdamW (beta1=0.9, beta2=0.95, weight_decay=0.1) |
| Learning rate | 3e-4 (with cosine decay to 1e-5) |
| Warmup steps | 2,000 |
| Gradient clipping | 1.0 |
| Gradient accumulation | 4 steps |
| Batch size (Phase 3.5) | 2 (effective batch = 8 with grad_accum) |
| Max steps (Phase 3.5) | 30,000 |
| Dropout | 0.1 |
| Mixed precision | AMP on CUDA, fp32 on CPU |

**Weight decay separation:** no weight decay is applied to biases, LayerNorm parameters, or embedding weights; only linear-layer weights receive it.

**Hardware (Phase 3.5):** Mac Mini M4 (CPU). ~3h13m wall clock for 30K steps at ~2.4 it/s. Cross-machine workflow: a Linux box for data prep, corpus curation, and SSH-driven Mac orchestration; the Mac Mini M4 runs the training loop. The previous Nemotron-on-Mac harness was replaced this phase by direct `ssh ghostlm-mac` from Linux, which drops the email-relay friction and lets the dev box drive the workhorse cleanly.

**Phase 1** was run on a ThinkPad Yoga 11e (Celeron N4100) and is preserved as `checkpoints/best_model_phase1.pt`. **Phase 2** is preserved as `checkpoints/best_model_phase2.pt` (val_loss 3.78 on the 2.66M-token corpus). **Phase 3 (v0.3.3)** is preserved as `checkpoints/phase3_refresh/best_model.pt` (val_loss 3.45 on the post-NVD-pull corpus, overall PPL 172).

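The decay/no-decay separation is typically implemented as two AdamW parameter groups. A sketch under the assumption that a dimensionality test plus a name check identifies biases, LayerNorm parameters, and embeddings; the project's exact selection rule may differ:

```python
import torch
import torch.nn as nn

def param_groups(model: nn.Module, weight_decay: float = 0.1):
    """Split parameters into decay / no-decay groups (illustrative)."""
    decay, no_decay = [], []
    for name, p in model.named_parameters():
        if not p.requires_grad:
            continue
        # biases and 1-D tensors (LayerNorm gains/biases) skip decay;
        # embeddings are excluded by name
        if p.dim() < 2 or "embed" in name.lower():
            no_decay.append(p)
        else:
            decay.append(p)
    return [
        {"params": decay, "weight_decay": weight_decay},
        {"params": no_decay, "weight_decay": 0.0},
    ]

optimizer = torch.optim.AdamW(
    param_groups(nn.Linear(8, 8)), lr=3e-4, betas=(0.9, 0.95)
)
```

Passing the groups (rather than `model.parameters()`) is what lets a single AdamW instance apply decay selectively.
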
## Evaluation

The v0.3.5 model is evaluated on two complementary axes: domain-modeling quality (per-source perplexity) and downstream reasoning (PMI-corrected security task accuracy).

### Per-source perplexity on the validation split

100 records sampled per source (deterministic seed). Lower is better.

| Source | v0.3.3 PPL | v0.3.5 PPL | Δ% | Reading |
|---|---|---|---|---|
| MITRE ATT&CK | 615.43 | 55.14 | **−91%** | Was OOD for v0.3.3; now in training |
| CTFtime real writeups | 184.24 | 60.71 | **−67%** | Was OOD for v0.3.3; now in training |
| CAPEC | 326.11 | 133.81 | **−59%** | Was OOD for v0.3.3; now in training |
| Synthetic CTF | 67.57 | 28.48 | **−58%** | Same data both phases; capacity reallocation |
| arXiv cs.CR | 671.09 | 354.95 | **−47%** | Same data both phases; capacity reallocation |
| NVD CVE | 24.19 | 27.55 | +14% | The expected, modest cost |
| **Overall** | **171.84** | **66.05** | **−62%** | |

The rebalance shifted the model from "knows the NVD register, treats everything else as generic English" to "models each domain in proportion to its training share." The 47–58% improvements on synthetic CTF and arXiv are particularly notable because **the training data for those sources didn't change**: the gain comes from parameter capacity that v0.3.3 was burning on memorizing duplicate CVE descriptions being redirected onto already-present sources.

### PMI-corrected security task accuracy

Five classification tasks × 25 hand-crafted samples each (125 total). PMI scoring (commit `aee8008`) replaces the previous length-normalized scoring, which mode-collapsed and reported 4/30 = 13.3% on every phase. The per-task random baseline depends on the number of candidate labels.

The eval was expanded from 30 → 125 samples in v0.3.6; the v0.3.5 model below was re-scored on the larger suite, so the numbers in this table are not directly comparable to the 30-sample numbers in older releases. The expanded suite is the new canonical measurement; future phases will be reported on it. The smaller suite is preserved at `logs/eval_security_phase3.5_pmi.json` for archaeology.

| Task | Labels | Random | v0.3.5 (125-sample) | Most-common share |
|---|---|---|---|---|
| CVE Severity Classification | 4 | 25.0% | 8/25 (32.0%) | Critical 72% |
| Vulnerability Type Detection | 10 | 10.0% | 8/25 (32.0%) | IDOR 44% |
| Attack Technique Identification | 10 | 10.0% | 10/25 (40.0%) | LatMov 36% |
| CTF Challenge Categorization | 5 | 20.0% | 10/25 (40.0%) | Forensics 64% |
| MITRE ATT&CK Tactic Classification | 12 | 8.3% | 3/25 (12.0%) | LatMov 40% |
| **Overall** | – | ~14.5% (avg) | **39/125 (31.2%)** | – |

Reading the table:

- **Vulnerability Type Detection (+22 pp), Attack Technique Identification (+30 pp), and CTF Challenge Categorization (+20 pp)** are the three tasks where v0.3.5 is meaningfully above random. They map onto the corpora that grew during the Phase 3.5 rebalance (CWE-tagged CVE bodies, MITRE technique pages, CTFtime real writeups), and the eval picks up that the model has internalized those domains.
- **CVE Severity Classification (+7 pp above random, with 72% of predictions collapsed onto Critical).** The model has learned that NVD descriptions usually accompany severe CVEs and bets that way regardless of input. The previous 10-sample suite happened to over-weight Critical/High labels in a way that masked this; the 25-sample suite with a balanced severity distribution exposes it. This is the canary metric for whether subsequent training rungs learn calibrated severity reasoning.
- **MITRE ATT&CK Tactic Classification (+3.7 pp above random).** Tactic-level classification is the model's weakest task: distinguishing Persistence from Privilege Escalation from Defense Evasion is hard from a single description even for humans, and ghost-tiny at 14.7M params on 8.8M tokens has not built that abstraction. This is the metric to watch when ghost-small is trained: if scaling the model doesn't move tactic accuracy above ~25%, the architectural jump didn't produce reasoning gains.

#### Cross-phase trajectory on the expanded suite

Every preserved ghost-tiny checkpoint was re-scored on the new 125-sample suite so the trajectory is end-to-end comparable. Cells are `correct/total (accuracy) [most-common-share]`:

| Task | Phase 1 (2K) | Phase 2 (v0.3.0) | Phase 3 (v0.3.3) | Phase 3.5 (v0.3.5) | Phase 3.6 (v0.3.7) | **Phase 4 (v0.4.0)** |
|---|---|---|---|---|---|---|
| CVE Severity Classification | 7/25 (28.0%) [100%] | 5/25 (20.0%) [96%] | 4/25 (16.0%) [48%] | **8/25 (32.0%) [72%]** | 4/25 (16.0%) [60%] | 6/25 (24.0%) [72%] |
| Vulnerability Type Detection | 3/25 (12.0%) [48%] | 6/25 (24.0%) [76%] | 7/25 (28.0%) [48%] | 8/25 (32.0%) [44%] | 3/25 (12.0%) [96%] | **10/25 (40.0%) [44%]** |
| Attack Technique Identification | 2/25 (8.0%) [24%] | 3/25 (12.0%) [88%] | 5/25 (20.0%) [72%] | **10/25 (40.0%) [36%]** | 4/25 (16.0%) [60%] | 4/25 (16.0%) [52%] |
| CTF Challenge Categorization | 2/25 (8.0%) [84%] | 7/25 (28.0%) [76%] | 6/25 (24.0%) [88%] | **10/25 (40.0%) [64%]** | 5/25 (20.0%) [48%] | 7/25 (28.0%) [72%] |
| MITRE ATT&CK Tactic Classification | 1/25 (4.0%) [72%] | 2/25 (8.0%) [76%] | 3/25 (12.0%) [64%] | 3/25 (12.0%) [40%] | 5/25 (20.0%) [76%] | 2/25 (8.0%) [44%] |
| **Overall (PMI)** | **15/125 (12.0%)** | **23/125 (18.4%)** | **25/125 (20.0%)** | **39/125 (31.2%)** | **21/125 (16.8%)** | **29/125 (23.2%)** |

Phase 4 ghost-small (v0.4.0) is the new canonical model for density / generation work but lands lower than Phase 3.5 on the PMI scoring above. The honest read requires the second scoring rule, **logp scoring**, which the suite also supports via `--scoring logp`:

| Phase | PMI | logp | Δ (PMI − logp) |
|---|---:|---:|---:|
| Phase 3.5 (ghost-tiny) | **31.2%** | 17.6% | +13.6 pp |
| Phase 4 (ghost-small) | 23.2% | **19.2%** | +4.0 pp |

Two things to note:

1. **PMI flatters Phase 3.5 by 13.6 pp.** PMI subtracts each candidate's unconditional log-prob to break ties, which is useful when the model is mode-collapsing because it normalizes away "this candidate is just inherently high-probability". A loose-distribution model with weakly differentiated logits gives PMI more separation to extract; a tight-distribution model gives less. Phase 3.5 (low capacity) gets the bigger PMI uplift; Phase 4 (higher capacity, sharper distribution) gets a smaller one.
2. **Logp, the more conservative scorer, picks Phase 4.** With logp scoring, Phase 4 narrowly beats Phase 3.5 (19.2% vs 17.6%) on the same 125-sample suite. The PMI-vs-logp gap diagnoses an eval-methodology limitation rather than a model regression.

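The PMI correction can be sketched concretely. The numbers below are invented purely to show the mechanism: a label that wins on raw conditional log-prob can lose once its unconditional probability is subtracted:

```python
def pmi_pick(cond_logps: dict, uncond_logps: dict) -> str:
    """PMI-corrected candidate selection (illustrative sketch): score each
    label by log p(label | prompt) - log p(label), so inherently frequent
    labels stop winning by default. Inputs map label -> summed log-prob."""
    scores = {c: cond_logps[c] - uncond_logps[c] for c in cond_logps}
    return max(scores, key=scores.get)

# "Critical" has the higher raw (logp) score, but after removing its high
# unconditional probability, "Medium" wins under PMI:
cond = {"Critical": -2.0, "Medium": -3.0}
uncond = {"Critical": -1.0, "Medium": -4.0}
assert max(cond, key=cond.get) == "Critical"   # logp scoring
assert pmi_pick(cond, uncond) == "Medium"      # PMI scoring
```

The same mechanism explains the table above: the looser the model's distribution, the more the `uncond` subtraction reshuffles the ranking.
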
The cleanest model metric remains per-source val PPL (no scoring rule, just density), where Phase 4 beats Phase 3.5 by 59–78% across every source. See the "Per-source perplexity" section above for the full table.

The clean head-to-head between deliberate moves (PMI suite):

- **Phase 2→3 (3× training volume, fixed corpus): +1.6 pp**
- **Phase 3→3.5 (corpus rebalance, fixed model+steps): +11.2 pp**
- **Phase 3.5→3.6 (corpus volume, fixed model+steps): −14.4 pp** (ghost-tiny capacity ceiling)
- **Phase 3.6→4 (model capacity, fixed corpus+steps): +6.4 pp PMI / +1.6 pp logp / −75% per-source PPL** (capacity-reallocation hypothesis confirmed)

Use `make eval-security-all-phases` to re-run end-to-end, or `make eval-compare-phases` to regenerate the PMI table from saved JSONs. Run with `--scoring logp` to reproduce the logp column.

### Cyber-text perplexity vs GPT-2 (fixed external test set, ten samples)

The benchmark sample is held out from training and unchanged across phases, so it is directly comparable.

| Phase | Perplexity | vs prior |
|---|---|---|
| Phase 1 | 2,183.94 | – |
| Phase 2 | 152.71 | −93% |
| Phase 3 (v0.3.3) | 142.09 | −7% |
| **Phase 3.5 (v0.3.5)** | **96.24** | **−32%** |
| GPT-2 small (117M) | 26.76 | (frozen baseline) |

ghost-tiny is 14.7M params vs GPT-2 small's 117M, so we're closing the cyber-text gap with ~8× less capacity. It is still far behind GPT-2 in absolute terms, which is expected: a 14.7M-param ghost-tiny is a learning artifact, not a competitor. The trajectory is what matters.

### Note on val_loss

Final v0.3.5 val_loss is 3.5518 vs v0.3.3's 3.4458. **Do not read this as v0.3.3 being a better model.** The val sets are different: v0.3.5's val covers six sources (NVD, arxiv, ctftime, mitre, capec, synthetic) while v0.3.3's was NVD-dominated. A more diverse val set is harder to predict per-token regardless of model quality. The per-source perplexity table above is the cleaner read.

## Intended Uses

### Primary use cases

- CVE analysis and vulnerability explanation
- CTF challenge reasoning and methodology
- Penetration testing report generation
- Security concept explanation and education
- Cybersecurity text completion and generation

### Out-of-scope uses

- **Production security decisions:** This is a small research model. Do not use it to make real security assessments.
- **Malware creation:** The model must not be used to develop malicious software or exploits for unauthorized use.
- **Attacking systems without authorization:** Any use for illegal cybersecurity activity is prohibited.

## Limitations

- **Small model size:** At 14.7M parameters, ghost-tiny is two to three orders of magnitude below production LLMs. Output quality reflects this.
- **Limited training data:** ~30M tokens is still small for language-model pre-training (Chinchilla-optimal for 14.7M params would be ~300M tokens; for ghost-1B, ~20B tokens). The corpus needs to grow another ~30× for the upper rungs of the scale ladder.
- **Surface-level fluency, weak grounding:** the model has learned the CVE-database register and surface vocabulary of cyber writing. It produces structurally correct CVE descriptions and security-prose grammar, but will hallucinate version chains, mix product names, and bind topic only loosely. See [Sample Generations](#sample-generations) below.
- **Hallucinated CVE-shaped output:** prompting with `CVE-YYYY-NNNNN is a vulnerability in...` will produce plausible-looking but entirely fabricated CVE descriptions, including invented version chains. **Do not treat these as factual.**
- **No instruction tuning:** ghost-tiny is a base language model. It generates text completions, not structured answers.
- **Mode collapse on severity classification:** the model predicts "Critical" on 72% of CVE Severity samples regardless of input. That is above random (32% vs a 25% baseline), but the prior is doing most of the work. Calibrated severity reasoning is not present at this scale.
- **No tactic-level abstraction:** MITRE ATT&CK Tactic Classification accuracy is 12% (vs an 8.3% random baseline); the model can identify concrete techniques but not the higher-level adversary goal that groups them. Expected to be the slowest reasoning capability to emerge with scale.
- **Web/binary CTF confusion:** CTF Categorization scores 100% on Forensics and Cryptography but 0% on Web Exploitation; the model conflates web-exploit writeups with adjacent categories. Targeted corpus expansion in v0.4.0 should focus there.
- **Synthetic CTF share:** 17.2% of training tokens are synthetic CTF writeups. Replacing them with a real CTFtime / GitHub writeup corpus is the priority next-track work; the synthetic source will be dropped once real-writeup volume exceeds it.
- **NVD corpus weight:** 65.3% of training tokens are CVE descriptions (down from 87% pre-rebalance). The model still tilts toward CVE-style prose on neutral prompts.
- **English only.**

## Sample Generations

Real outputs from `checkpoints/best_model.pt` (Phase 3, val_loss 3.45), at `temperature=0.8, top_k=50`:

> **Prompt:** *A SQL injection attack works by*
> ...injecting a crafted payload in the "password" parameter. NOTE: some of these details are obtained from third party information. EINXCMS 3.4.7.2.0 contains a cross-site scripting vulnerability that allows users to embed arbitrary JavaScript code in the Web UI thus altering the intended functionality potentially leading to credentials disclosure within a trusted session. This issue is fixed in 1.0.10. Use after free in Google Chrome before 17.0.66.75 allowed a remote...

> **Prompt:** *CVE-2021-44228 is a vulnerability in*
> ...the same product which allows local users to overwrite the filesystem via a large value (1) id parameter to the filename parameter to wp-admin/admin/mod_api/admin/admin/admin/login.php in QNAP Business Application Platform, iMC Net-based, iQAP_NAP-V, and MQ-IP-P, MQ-V-M DB2G, MQ-AN-W, MQ-2 A300-

> **Prompt:** *To enumerate Active Directory users from a Linux box, an attacker can*
> ...exploit an attacker to inject malicious scripts as the target system and execute arbitrary code in the context of the current process. The vulnerability is due to the application. An attacker could exploit this vulnerability by submitting a crafted link that executes in the application. A successful exploit could allow the attacker to obtain sensitive information on the user. The BIO plugin for WordPress is vulnerable to Cross-Site Request Forgery in all versions up to, and including, 2.2. This is due to missing nonce validation on...

> **Prompt:** *Cross-site scripting (XSS) allows attackers to*
> ...inject arbitrary web script or HTML via unspecified vectors. The davcodec_server function in libav.c in the Linux kernel before 4.4.0.5, 4.x before 4.6.1, and 4.x before 4.3.0.x before 4.3.3, does not properly restrict access to (1) Ntcpd) PTP-based LDAP (2) X-3-NAP00-00N-N

**What this shows compared to Phase 2:** the *register* is correct now. Phrases like "Cross-Site Request Forgery in all versions up to, and including, 2.2; this is due to missing nonce validation", "use after free", "remote attacker", "submitting a crafted link", and "in the context of the current process" are all real CVE-database language used in roughly the right context. Phase 2 produced fragments like "the login page is used to the login page's name of the login page does not properly sanitization"; the same model class can't produce that anymore. **The hallucinations are still rampant** (made-up products, scrambled version strings, mixed-up vendor names): the model has the *form* of CVE descriptions but not the *facts*. This is the expected outcome of corpus expansion at fixed model size: better surface fluency, no new factual capability.

## Ethical Considerations

GhostLM is trained on cybersecurity content that inherently includes offensive security knowledge: exploit techniques, attack methodologies, and vulnerability details. This is the same information freely available in CVE databases, security conferences, and published research.

**Responsible use:**

- This model is intended for defensive security, education, and research.
- Users should follow responsible disclosure practices when working with vulnerability information.
- The model's outputs should not be used to attack systems without explicit authorization.
- Security professionals should apply the same ethical standards they would to any security tool.

**Dual-use risk:** Like any cybersecurity knowledge base, the information the model generates could theoretically be misused. However, the model's small size and limited capabilities make it far less capable than freely available tools and resources already in the security community.

## How to Use

```python
import torch
from ghostlm import GhostLM, GhostLMConfig, GhostTokenizer

# Load ghost-tiny
config = GhostLMConfig.from_preset("ghost-tiny")
model = GhostLM(config)
tokenizer = GhostTokenizer()

# Load trained weights (v0.3.3, Phase 3 ghost-tiny refresh)
checkpoint = torch.load("checkpoints/best_model.pt", map_location="cpu")
model.load_state_dict(checkpoint["model_state_dict"])
model.eval()

# Generate
prompt = "A SQL injection attack works by"
ids = tokenizer.encode(prompt)
input_tensor = torch.tensor(ids).unsqueeze(0)
output = model.generate(input_tensor, max_new_tokens=100, temperature=0.8, top_k=50)
print(tokenizer.decode(output[0].tolist()))
```

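The sampling parameters above (`temperature=0.8, top_k=50`) are standard temperature + top-k decoding. A minimal sketch of what one such sampling step does; this helper is illustrative only, not part of the ghostlm API:

```python
import torch

@torch.no_grad()
def sample_next(logits: torch.Tensor, temperature: float = 0.8, top_k: int = 50) -> int:
    """One temperature + top-k sampling step over a 1-D logits vector:
    sharpen/flatten with temperature, keep only the top_k candidates,
    then draw from the renormalized distribution."""
    logits = logits / temperature
    topk_vals, topk_idx = torch.topk(logits, min(top_k, logits.size(-1)))
    probs = torch.softmax(topk_vals, dim=-1)
    choice = torch.multinomial(probs, num_samples=1)
    return int(topk_idx[choice])
```

With `top_k=1` this reduces to greedy decoding; lowering the temperature below 1.0 concentrates probability on the highest logits.
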
## Evaluation (Phase 3: 30K steps, post-NVD-pull corpus)

### Validation loss

- **Final validation loss (step 30000):** **3.4458** (perplexity ≈ 31)
- **Curve shape:** monotonic decrease over 60 eval points; no instability, still slightly descending at step 30K (diminishing returns rather than a plateau).
- Comparison: Phase 2 val_loss was 3.7813 on the 2.66M-token corpus. Both runs use the deterministic-hash split, so the **0.34-nat drop is a real corpus-quality dividend at fixed model size**.

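The loss-to-perplexity conversion is just exp of the mean per-token cross-entropy in nats; a quick sanity check of the numbers above:

```python
import math

def ppl(mean_nll_nats: float) -> float:
    """Perplexity = exp(mean per-token negative log-likelihood in nats)."""
    return math.exp(mean_nll_nats)

print(round(ppl(3.4458), 1))  # Phase 3 val_loss -> 31.4
print(round(ppl(3.7813), 1))  # Phase 2 val_loss -> 43.9
```
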
### Perplexity vs GPT-2 (cyber-text benchmark)

Same hardcoded `BENCHMARK_TEXTS` set used for every prior phase (10 cyber-text samples, fair comparison):

| Model | Perplexity (lower is better) |
|---|---|
| GPT-2 (124M baseline) | **26.76** |
| **ghost-tiny, Phase 3 (released)** | **142.09** |
| ghost-tiny, Phase 2 | 152.71 |
| ghost-tiny, Phase 1 | 2,183.94 |

Phase 3 is **7% better** than Phase 2 on this benchmark and **15.4× better** than Phase 1. It is still 5.3× behind GPT-2, as expected for a 14.7M-param model on ~30M tokens vs a 124M-param model on ~40B tokens of WebText. The Phase 2→3 gain is modest because the 10-text benchmark contains generic security prose that already overlapped both corpora: most of the perplexity dividend was earned at Phase 2 (corpus quality + clean split), and the residual gain at Phase 3 comes from the larger volume. Raw output: `logs/benchmark_phase3.json`.

### Security-domain task evaluation

Re-run on the Phase 3 checkpoint via `scripts/eval_security.py` (3 tasks, 30 questions: CVE Severity Classification, Vulnerability Type Detection, Attack Technique Identification):

| Phase | Score | Failure mode |
|---|---|---|
| Phase 1 | 4/30 (13.3%) | Mode-collapsed |
| Phase 2 | 4/30 (13.3%) | Mode-collapsed: predicts "High" / "Cross-Site Scripting" / "Supply Chain Compromise" |
| **Phase 3** | **4/30 (13.3%)** | Mode-collapsed: predicts "Medium-or-High" / "Cross-Site Scripting" / "DLL Search Order Hijacking" |

Same numerical score as prior phases, **but with a different mode-collapse pattern**: the model has learned the *most frequent label per task* rather than the discriminative structure, and at Phase 3 the most-frequent attack-technique label has shifted (from Supply Chain Compromise to DLL Search Order Hijacking), reflecting the corpus shift. CVE severity picks up some genuine discrimination (it gets 2 right by mixing in Mediums). **The random-guess baseline is ~33%** (4-way multiple choice), so 13.3% is below random, confirming the model is not yet doing real classification at this scale. Raw output: `logs/eval_security_phase3.json`.

**What this means:** the corpus-expansion dividend is real on language modeling (val_loss + perplexity) but invisible on structured-task eval. Both numbers are baselines for the next scale rung; ghost-small at ~55M params is where structured-task eval should start to reward the better corpus.

### Phase comparison plot

`logs/phase_comparison.png` shows final val_loss, perplexity (vs the GPT-2 baseline), and security-task accuracy across all three phases side by side. Generated by `scripts/plot_phase_comparison.py`.

### Training curve

`logs/phase3_refresh/training_curve.png` shows the 30K-step Phase 3 curve. Phase 1 and Phase 2 logs were too sparse for real curves (3–5 endpoint datapoints); Phase 3 has 60 eval points, the first dense ghost-tiny training curve produced so far.

## Citation

```bibtex
@misc{ghostlm2026,
  author = {Joe Munene},
  title = {GhostLM: An Open-Source Cybersecurity-Focused Language Model},
  year = {2026},
  publisher = {GitHub},
  url = {https://github.com/joemunene-by/GhostLM}
}
```

## Links

- **GitHub:** [github.com/joemunene-by/GhostLM](https://github.com/joemunene-by/GhostLM)
- **Author:** [Joe Munene](https://github.com/joemunene-by)
- **License:** [MIT](LICENSE)

config.json
ADDED

{
  "vocab_size": 50264,
  "context_length": 1024,
  "d_model": 512,
  "n_heads": 8,
  "n_layers": 6,
  "d_ff": 2048,
  "dropout": 0.1,
  "bias": true,
  "use_rope": false,
  "rope_base": 10000.0,
  "use_swiglu": false,
  "use_rmsnorm": false,
  "use_flash_attention": false,
  "batch_size": 8,
  "learning_rate": 3e-05,
  "weight_decay": 0.1,
  "beta1": 0.9,
  "beta2": 0.95,
  "grad_clip": 1.0,
  "grad_accum_steps": 4,
  "warmup_steps": 120,
  "max_steps": 1800,
  "eval_interval": 100,
  "save_interval": 600,
  "data_dir": "data/processed",
  "checkpoint_dir": "checkpoints/phase5_chat_v3",
  "log_dir": "logs/phase5_chat_v3",
  "device": "auto",
  "dtype": "float32",
  "seed": 42,
  "use_wandb": false
}

pytorch_model.pt
ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:a1c2dbbb3f2559153953cdec8c0e8adbcdf0659fe4b61c3eb05a4e21c6b216f0
size 542187521

tokenizer_config.json
ADDED

{
  "tokenizer_type": "tiktoken",
  "vocab_size": 50261,
  "model": "gpt2"
}