# 🔨 MicroForge: A Novel Mobile-First Image Generation Architecture

> **Recurrent Latent Planning × SSM-Conv Hybrid Backbone × Deep Compression**

MicroForge is a new image generation architecture designed from scratch for consumer devices (3-4 GB of RAM) and trainable on a single 16 GB GPU. It combines ideas from recent efficiency research into a compact, editing-ready system.

**Key numbers:**

- **MicroForge-tiny**: 28M params, ~56 MB fp16, ~0.13 s/image on CPU
- **MicroForge-small**: 114M params, ~228 MB fp16
- **MicroForge-base**: 193M params, ~386 MB fp16
- **Editing-ready**: the same backbone handles generation, editing, inpainting, and super-resolution

---

## Table of Contents

1. [Architecture Overview](#1-architecture-overview)
2. [Paper Shortlist & Critique](#2-paper-shortlist--critique)
3. [Module-by-Module Design](#3-module-by-module-design)
4. [Mathematical Formulation](#4-mathematical-formulation)
5. [Training Objective](#5-training-objective)
6. [Memory & Compute Budget](#6-memory--compute-budget)
7. [Training Curriculum](#7-training-curriculum-16-gb-gpu)
8. [Mobile Deployment](#8-mobile-deployment)
9. [Failure Modes](#9-failure-modes)
10. [Ablation Plan](#10-ablation-plan)
11. [Editing Roadmap](#11-editing-roadmap)
12. [Quick Start](#12-quick-start)

---

## 1. Architecture Overview

```
┌────────────────────────────────────────────────────────────────┐
│                     MicroForge Pipeline                        │
├────────────────────────────────────────────────────────────────┤
│                                                                │
│  Text ──→ [Text Encoder (CLIP/TinyCLIP)] ──→ text_emb, pooled  │
│                                    │                           │
│                                    ▼                           │
│  Noise z_T ──→ [Recurrent Latent Planner]                      │
│                 │  K=32 plan tokens (49 KB state)              │
│                 │  READ: cross-attn(plan, z_t) — O(K·N)        │
│                 │  REASON: self-attn(plan) — O(K²)             │
│                 │  Self-condition from previous step           │
│                 ▼                                              │
│  z_t ──→ [SSM-Conv Hybrid Backbone] ◄── planner_tokens         │
│           │  Per block (×6/12/18):                             │
│           │   1. AdaLN-Group(z_t, t_emb + text_pool)           │
│           │   2. BiSSM(zigzag scan) — O(N)                     │
│           │   3. CrossAttn(z_t, text_emb ∥ plan) — O(N·M)      │
│           │   4. FFN(expansion=3) — O(N·D)                     │
│           │  Every K blocks: SharedMQA(z_t) — single instance  │
│           ▼                                                    │
│  v_pred = backbone(z_t, t, text, plan)                         │
│  z_{t-1} = z_t - Δt · v_pred   (Euler ODE step)                │
│                                                                │
│  z_0 ──→ [DC-VAE Decoder (32× upsample)] ──→ Image [3,H,W]     │
│                                                                │
│  ┌─── Editing Mode (same backbone) ────────────────────┐       │
│  │ z_input = [z_target_noise ∥ z_source] (width-concat) │      │
│  │ Task token: [Generate] / [Edit] / [Inpaint] / [SR]   │      │
│  │ No extra parameters needed                           │      │
│  └──────────────────────────────────────────────────────┘      │
└────────────────────────────────────────────────────────────────┘
```

### What's Novel

1. **Recurrent Latent Planner (RLP)**: Persistent latent tokens that carry "memory" across denoising steps. The planner reasons at a higher level before the backbone commits to pixel changes. Inspired by RIN (Jabri et al., 2022) but adapted for diffusion: plan tokens READ from the noised latent, REASON internally via self-attention, then inject guidance into the backbone via cross-attention. Self-conditioning carries plan state across steps (a minimal sketch of one planner step follows this list).

2. **SSM-Conv Hybrid Backbone**: Replaces O(N²) self-attention with bidirectional SSM scanning (O(N)) plus a local depthwise convolution. One globally shared, lightweight MQA attention block provides in-context learning capability. The hybrid achieves the global receptive field of attention at linear cost.

3. **Deep Compression VAE with Residual Shortcuts**: 32× spatial compression using space-to-channel rearrangement as non-parametric skip connections. 512px → 16×16×32 latent = only 256 spatial tokens (vs. 4096 in SD-VAE).

4. **Editing by Design**: DreamLite-style spatial concatenation enables generation, editing, inpainting, and super-resolution with zero extra parameters. The same backbone processes all tasks.
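
To make the READ → REASON → inject loop concrete, here is a minimal PyTorch sketch of one planner step. The module names, dimensions, and the gating form of the self-conditioning are illustrative assumptions, not the actual `microforge/planner.py` implementation.

```python
import torch
import torch.nn as nn

class PlannerStepSketch(nn.Module):
    """Hypothetical single planner step: READ from the noised latent,
    REASON over plan tokens, with self-conditioning across steps."""

    def __init__(self, dim=256, num_plan_tokens=32, num_heads=4):
        super().__init__()
        self.read = nn.MultiheadAttention(dim, num_heads, batch_first=True)    # plan attends to z_t: O(K·N)
        self.reason = nn.MultiheadAttention(dim, num_heads, batch_first=True)  # plan self-attention: O(K²)
        self.gate = nn.Parameter(torch.zeros(1))  # learned self-conditioning gate w

    def forward(self, plan_init, plan_prev, z_tokens):
        # Self-conditioning: blend the previous step's plan with a fresh
        # text-derived initialization: p_t = σ(w)·p_{t+1} + (1-σ(w))·p_init
        g = torch.sigmoid(self.gate)
        plan = g * plan_prev + (1 - g) * plan_init
        # READ: plan tokens query the noised latent tokens z_t
        plan = plan + self.read(plan, z_tokens, z_tokens, need_weights=False)[0]
        # REASON: plan tokens exchange information among themselves
        plan = plan + self.reason(plan, plan, plan, need_weights=False)[0]
        return plan  # injected into the backbone via cross-attention

plan_init = torch.randn(1, 32, 256)  # K=32 plan tokens, dim 256
z_tokens = torch.randn(1, 256, 256)  # 16×16 latent flattened to N=256 tokens
plan = PlannerStepSketch()(plan_init, plan_init, z_tokens)
print(plan.shape)  # torch.Size([1, 32, 256])
```
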
---

## 2. Paper Shortlist & Critique

### A. Efficient Image Generation

| Paper | Problem Solved | What to Borrow | Limitations |
|-------|---------------|----------------|-------------|
| **SANA-Sprint** (2503.09641) | 1-step generation, 0.6B params | Linear DiT + DC-AE latent + sCM+LADD distillation | Text encoder dominates memory |
| **SnapGen** (2412.09619) | Mobile T2I, 0.38B, iPhone 15 | Remove SA from high-res stages, MQA, expanded separable conv | No public weights |
| **SnapGen++** (2601.08303) | 360 ms/step on iPhone, 0.4B | ASSA, elastic supernetwork, tiny VAE | Proprietary |
| **DreamLite** (2603.28713) | Mobile gen+edit unified | Spatial concat, task-progressive training | No public weights |

### B. Subquadratic Backbones

| Paper | Problem Solved | What to Borrow | Limitations |
|-------|---------------|----------------|-------------|
| **DiMSUM** (2411.04168) | Best FID with Mamba, 3× faster convergence | Wavelet+Mamba, shared attention block | Complex implementation |
| **ZigMa** (2403.13802) | Spatial continuity for SSM | Zigzag-8 scan, heterogeneous layers | Only class-conditional |
| **LiT** (2501.12976) | Pure linear DiT | DWConv inside linear attn, weight inheritance | Small quality drop at low res |

### C. Compact Latent Spaces

| Paper | Problem Solved | What to Borrow | Limitations |
|-------|---------------|----------------|-------------|
| **DC-AE** (2410.10733) | 32-128× compression | Residual space-to-channel shortcuts | High-channel latents need a bigger backbone |
| **TiTok** (2406.07550) | Images as 32-128 1D tokens | Break the 2D grid, proxy-code VQ | Resolution-fixed |

### D. Editing Patterns

| Paper | Problem Solved | What to Borrow | Limitations |
|-------|---------------|----------------|-------------|
| **DreamLite** (2603.28713) | Mobile gen+edit | Spatial concat (+14 GenEval vs. channel) | Needs editing data at scale |
| **FLUX Kontext** (2506.15742) | Best editing quality | 3D RoPE offset, multi-reference | 12B params, not mobile |
| **RIN** (2212.11972) | Decoupled computation | Latent tokens + cross-attn, self-conditioning | Pixel-space only |

---

## 3. Module-by-Module Design

### Module A: Deep Compression VAE (`microforge/vae.py`)

32× spatial compression with space-to-channel residual shortcuts (the DC-AE technique); a sketch of the shortcut follows the table.

| Config | Channels | Latent C | Params | FP16 |
|--------|----------|----------|--------|------|
| tiny | [32,64,128,256] | 16 | 16M | 32 MB |
| small | [64,128,256,512] | 32 | 77M | 154 MB |
| base | [128,256,512,512] | 32 | 110M | 220 MB |
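
A minimal sketch of the space-to-channel residual shortcut, assuming PyTorch's `pixel_unshuffle` for the non-parametric rearrangement and group-averaging to match channel counts (as in DC-AE); the conv stack below stands in for the real encoder stage.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DownStageSketch(nn.Module):
    """Hypothetical 2× downsampling stage with a space-to-channel shortcut."""

    def __init__(self, c_in, c_out):
        super().__init__()
        # learned path: strided conv halves H and W
        self.conv = nn.Conv2d(c_in, c_out, kernel_size=3, stride=2, padding=1)
        self.c_out = c_out

    def forward(self, x):
        # non-parametric shortcut: 2×2 space-to-channel yields 4*c_in channels;
        # group-average them down to c_out so the shapes match
        skip = F.pixel_unshuffle(x, 2)                   # [B, 4*c_in, H/2, W/2]
        b, c, h, w = skip.shape
        skip = skip.view(b, self.c_out, c // self.c_out, h, w).mean(2)
        return self.conv(x) + skip                       # residual around the learned path

x = torch.randn(1, 32, 64, 64)
y = DownStageSketch(32, 64)(x)
print(y.shape)  # torch.Size([1, 64, 32, 32])
```
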

### Module B: SSM-Conv Hybrid Backbone (`microforge/backbone.py`)

Bidirectional SSM + local DWConv + one globally shared MQA attention block; a sketch of the zigzag scan ordering follows the table.

| Config | Depth | Dim | Params | FP16 |
|--------|-------|-----|--------|------|
| tiny | 6 | 256 | 8M | 16 MB |
| small | 12 | 384 | 29M | 58 MB |
| base | 18 | 512 | 71M | 142 MB |
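
The zigzag (boustrophedon) scan keeps spatially adjacent latent tokens adjacent in the 1D sequence the SSM consumes. A minimal sketch of the ordering, independent of the actual backbone code:

```python
import torch

def zigzag_order(h, w):
    """Row-major scan that reverses direction on every other row,
    so consecutive sequence positions are always spatial neighbors."""
    idx = torch.arange(h * w).view(h, w)
    idx[1::2] = idx[1::2].flip(-1)   # reverse odd rows
    return idx.view(-1)

order = zigzag_order(4, 4)
print(order.tolist())
# [0, 1, 2, 3, 7, 6, 5, 4, 8, 9, 10, 11, 15, 14, 13, 12]

# Applying the scan to flattened latent tokens [B, N, D]:
tokens = torch.randn(2, 16, 256)
scanned = tokens[:, order]           # feed to the forward SSM
reverse = scanned.flip(1)            # feed to the backward SSM
```
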

### Module C: Recurrent Latent Planner (`microforge/planner.py`)

32 persistent plan tokens (≈49 KB of state per plan); cost is O(K² + K·N) per layer. See the planner sketch under [What's Novel](#whats-novel).

### Module D: Text Encoder (pluggable)

- Mobile: TinyCLIP, ~60M params
- Quality: CLIP-L, ~428M params
- Best quality: Gemma-2-2B, ~2B params

---

## 4. Mathematical Formulation

**Rectified flow**: z_t = (1-t)·z_0 + t·ε, with z_0 the clean latent and ε ~ N(0, I)

**Velocity target**: v* = ε - z_0 (so dz_t/dt = v*)

**Training loss**: L = E[w(t) · ||v_θ(z_t, t, c) - v*||²], with w(t) = 1/(1 + |2t - 1|)

**Sampling (Euler step, t decreasing)**: z_{t-Δt} = z_t - Δt · v_θ(z_t, t, c)

**Planner self-conditioning**: p_t = σ(w)·p_{t+1} + (1-σ(w))·p_init(text)

**CFG**: v̂ = v_∅ + s·(v_c - v_∅)

A sketch of this sampler, including the CFG combination, follows.
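
The sketch below implements the Euler steps and CFG formula above, integrating from t=1 (noise) down to t=0; `model` and its conditioning interface are placeholders, not the actual pipeline API.

```python
import torch

@torch.no_grad()
def sample(model, z, text_emb, null_emb, num_steps=4, cfg_scale=4.5):
    """Euler ODE sampler for rectified flow: z_{t-Δt} = z_t - Δt · v_θ."""
    ts = torch.linspace(1.0, 0.0, num_steps + 1)
    for i in range(num_steps):
        t, dt = ts[i], ts[i] - ts[i + 1]
        # CFG: v̂ = v_∅ + s·(v_c - v_∅)
        v_cond = model(z, t, text_emb)
        v_null = model(z, t, null_emb)
        v = v_null + cfg_scale * (v_cond - v_null)
        z = z - dt * v   # step toward t=0 (clean latent)
    return z
```
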
---

## 5. Training Objective

- **Stage 1 (VAE)**: L1 + λ_KL·KL + LPIPS + GAN
- **Stages 2-3 (Flow)**: w(t)·||v_θ - v*||² (sketched below)
- **Stage 4 (KD)**: L_flow + λ_t·α(t)·||v_student - v_teacher||²
- **Stage 5 (Edit)**: ||v_θ([z_t ∥ z_src], t, c_edit) - v*||²
- **Stage 6 (Distill)**: ||f_θ(z_t, t) - f_{θ⁻}(z_t', t')||²
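
A sketch of the Stages 2-3 objective, pairing the rectified-flow construction from Section 4 with the w(t) weighting; the model call signature is again a placeholder.

```python
import torch

def flow_loss(model, z0, text_emb):
    """Weighted rectified-flow matching: L = w(t) · ||v_θ - (ε - z_0)||²."""
    b = z0.shape[0]
    t = torch.rand(b, device=z0.device)             # t ~ U(0, 1)
    eps = torch.randn_like(z0)
    t_ = t.view(b, 1, 1, 1)
    z_t = (1 - t_) * z0 + t_ * eps                  # forward interpolation
    v_star = eps - z0                               # velocity target
    w = 1.0 / (1.0 + (2 * t - 1).abs())             # emphasizes mid-trajectory t
    err = (model(z_t, t, text_emb) - v_star).pow(2).mean(dim=(1, 2, 3))
    return (w * err).mean()
```
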
---

## 6. Memory & Compute Budget

### Total System Memory (FP16, no text encoder)

- **Tiny**: ~76 MB inference @ 512px
- **Small**: ~308 MB inference @ 512px
- **Base**: ~530 MB inference @ 512px

With TinyCLIP (+120 MB), the tiny config stays under 500 MB total.

---

## 7. Training Curriculum (16 GB GPU)

| Stage | Freeze | Train | Data | Res | Steps | LR | Time (T4) |
|-------|--------|-------|------|-----|-------|----|-----------|
| 1. VAE | — | VAE | ImageNet-50K | 128→256 | 50K | 1e-4 | 6h |
| 2. Low-Res | VAE | Backbone+Planner | Synthetic 100K | 128→256 | 100K | 1e-4 | 12h |
| 3. High-Res | VAE | Backbone+Planner | Same + high-res | 256→512 | 50K | 5e-5 | 8h |
| 4. Distill | VAE | Backbone+Planner | Teacher outputs (cached) | 512 | 30K | 2e-5 | 6h |
| 5. Edit | VAE | All (low LR) | IP2P + MagicBrush | 256→512 | 20K | 1e-5 | 4h |

---

## 8. Mobile Deployment

1. Step-distill to 4 steps (consistency / LADD)
2. Export to ONNX with static shapes (see the sketch after this list)
3. INT8 weight quantization
4. Convert to CoreML / NNAPI / QNN
5. Profile on-device
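
A minimal sketch of steps 2-3. The backbone's `(z_t, t, text_emb)` interface and the stub module are assumptions; `torch.onnx.export` and ONNX Runtime's `quantize_dynamic` are standard APIs.

```python
import torch
import torch.nn as nn
from onnxruntime.quantization import QuantType, quantize_dynamic

class BackboneStub(nn.Module):
    """Stand-in for the trained backbone; forward(z_t, t, text_emb)."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Conv2d(16, 16, 3, padding=1)
    def forward(self, z_t, t, text_emb):
        # toy computation that touches every input so the export keeps them
        return self.proj(z_t) + t.reshape(1, 1, 1, 1) + text_emb.mean()

model = BackboneStub().eval()
dummy = (
    torch.randn(1, 16, 8, 8),   # z_t: 256px / 32× compression -> 8×8 latent
    torch.tensor([0.5]),        # timestep
    torch.randn(1, 77, 768),    # text embeddings
)

# Step 2: export with static shapes (no dynamic_axes -> fixed-size graph)
torch.onnx.export(
    model, dummy, "microforge_backbone.onnx",
    input_names=["z_t", "t", "text_emb"],
    output_names=["v_pred"],
    opset_version=17,
)

# Step 3: INT8 weight-only quantization via ONNX Runtime
quantize_dynamic("microforge_backbone.onnx", "microforge_backbone_int8.onnx",
                 weight_type=QuantType.QInt8)
```
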

---

## 9. Failure Modes

| Failure | Fix |
|---------|-----|
| SSM scan artifacts | More scan directions + a larger DWConv kernel |
| Planner collapse | Diversity loss on plan tokens (sketched below) |
| VAE blur | Lower λ_KL + adversarial loss |
| Training instability | Gradient clipping at 2.0 + a separate LR for SSM parameters |
| Editing forgetting | Spatial concat + task-progressive training |
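
One way to implement the plan-token diversity loss from the table, assuming an off-diagonal cosine-similarity penalty; the actual regularizer used in training is unspecified, so this is an illustrative choice.

```python
import torch
import torch.nn.functional as F

def plan_diversity_loss(plan):
    """Penalize pairwise cosine similarity between plan tokens so they
    do not collapse onto a single direction. plan: [B, K, D]."""
    p = F.normalize(plan, dim=-1)
    sim = p @ p.transpose(1, 2)                       # [B, K, K] cosine similarities
    k = plan.shape[1]
    off_diag = sim - torch.eye(k, device=plan.device) # zero out the diagonal
    return off_diag.pow(2).mean()

plan = torch.randn(4, 32, 256)
print(plan_diversity_loss(plan))  # scalar; add with a small weight to the flow loss
```
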

---

## 10. Ablation Plan

| ID | Change | Expected Outcome |
|----|--------|------------------|
| A1 | Remove planner | FID 2-5% worse |
| A2 | Full attention (no SSM) | Better at 256px, worse at 1024px, 2-4× slower |
| A3 | Remove shared MQA | FID 1-3% worse |
| A4 | Remove DWConv in SSM | FID 2-4% worse |
| A5 | No self-conditioning | More step-to-step jitter |
| A6 | Full vs. grouped adaLN | +46% params for marginal gain |
| A7 | f16 vs. f32 vs. f64 VAE (compression factor) | f32 is the sweet spot |
| A8 | Spatial vs. channel concat | Spatial preserves generation quality |

---

## 11. Editing Roadmap

- ✅ Phase 1: Architecture supports spatial concatenation
- Phase 2: Image editing (InstructPix2Pix data)
- Phase 3: Inpainting (masked spatial concat)
- Phase 4: Super-resolution
- Phase 5: Style/reference conditioning (add IP-Adapter, +22M params)
- Phase 6: Local editing (region-aware planner)

A sketch of the spatial-concat input construction behind Phases 2-3 follows.
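
A minimal sketch of the width-concatenated editing input, assuming latents of shape [B, C, H, W]; the task-token ids and shape conventions are illustrative, not the actual `microforge` API.

```python
import torch

# Task-token ids are an assumption; the README names these four tasks.
TASKS = {"generate": 0, "edit": 1, "inpaint": 2, "sr": 3}

def build_edit_input(z_noise, z_source, task="edit", mask=None):
    """Width-concatenate the noised target latent with the clean source
    latent so one backbone sees both; inpainting masks the source side."""
    if task == "inpaint" and mask is not None:
        z_source = z_source * mask                    # hide the region to fill
    z_in = torch.cat([z_noise, z_source], dim=-1)     # [B, C, H, 2W]
    task_id = torch.tensor([TASKS[task]])
    return z_in, task_id

z_noise = torch.randn(1, 16, 8, 8)   # noised target latent
z_src = torch.randn(1, 16, 8, 8)     # encoded source image
z_in, task_id = build_edit_input(z_noise, z_src, task="edit")
print(z_in.shape)  # torch.Size([1, 16, 8, 16])
```
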

---

## 12. Quick Start

```python
import torch
from microforge.vae import MicroForgeVAE
from microforge.backbone import MicroForgeBackbone
from microforge.planner import RecurrentLatentPlanner
from microforge.pipeline import MicroForgePipeline, SimpleTextEncoder

# Assemble the tiny config: 32x-compression VAE, SSM-Conv backbone, planner
vae = MicroForgeVAE(config='tiny')
backbone = MicroForgeBackbone(latent_channels=16, config='tiny')
planner = RecurrentLatentPlanner(num_plan_tokens=16, dim=256, text_dim=768, latent_channels=16)
text_enc = SimpleTextEncoder(embed_dim=768, num_layers=2)
pipeline = MicroForgePipeline(vae, backbone, text_enc, planner)

# Dummy token ids stand in for a tokenized prompt
tokens = torch.randint(0, 8192, (1, 10))
images = pipeline.text2img(tokens, height=256, width=256, num_steps=4)
```

---

## License

MIT License

## Citation

```bibtex
@software{microforge2025,
  title={MicroForge: Mobile-First Image Generation with Recurrent Latent Planning},
  year={2025},
  url={https://huggingface.co/asdf98/microforge}
}
```