rogermt committed on
Commit 91fd7ed · verified · 1 Parent(s): c95af57

Add TODO.md — next steps for NSGF++ reproduction

Files changed (1): TODO.md ADDED (+161 −0)

# TODO.md — Next Steps for NSGF++ Reproduction

## Current Status

| Experiment | Pool Building | Phase 1 (NSGF) | Phase 2 (NSF) | Phase 3 (Predictor) | Inference | Eval |
|-----------|:---:|:---:|:---:|:---:|:---:|:---:|
| **2D 8gaussians** | ✅ | ✅ | — | — | ✅ | ✅ W2=2.04 (small run) |
| **MNIST** | ✅ | 🔶 runs, loss converging (~0.03), interrupted at 9.5K/100K | untested on GPU | untested on GPU | untested | untested |
| **CIFAR-10** | 🔶 OOM fixed (batch 128→32), untested on GPU | untested | untested | untested | untested | untested |

✅ = verified working · 🔶 = partially done · ❌ = blocked

---

## Immediate — Run Full Experiments

### 1. MNIST full run on T4

This is the most important next step. All known code bugs are fixed; what remains is a clean Kaggle run.

```bash
cd /kaggle/working/ && rm -rf nsgf-plusplus
git clone https://huggingface.co/rogermt/nsgf-plusplus
cd nsgf-plusplus && pip install -r requirements.txt

# Phase 1: pool (~7 min) + NSGF training (100K steps, ~2.5 hrs)
python main.py --experiment mnist

# If the session runs out, next session:
python main.py --experiment mnist --resume-phase 2

# If Phase 2 is done:
python main.py --experiment mnist --resume-phase 3
```

**Expected runtimes on T4:**
- Pool building (1500 batches): ~7 min
- Phase 1 NSGF (100K steps): ~2.5 hours
- Phase 2 NSF (100K steps): ~3-4 hours (each step does NSGF inference + NSF forward/backward)
- Phase 3 Predictor (40K steps): ~1.5 hours
- **Total: ~7-8 hours** — tight for one 9-hour Kaggle session

**Alternative: use `--train-iters 50000` for Phases 1 and 2 to fit in one session, accepting lower quality.**

**Paper target: FID ≈ 3.8 at NFE=60**

---

### 2. CIFAR-10 first test on T4

Once MNIST works, test CIFAR-10 with the reduced Sinkhorn batch size.

```bash
# Smoke test first (should run ~2 min)
python main.py --experiment cifar10 --pool-batches 10 --train-iters 50

# If smoke test passes, real Phase 1:
python main.py --experiment cifar10 --train-iters 50000

# Subsequent sessions:
python main.py --experiment cifar10 --resume-phase 2 --train-iters 50000
python main.py --experiment cifar10 --resume-phase 3
```

**If it still OOMs**: try `--sinkhorn-batch 16 --pool-batches 20000`

**Paper target: FID ≈ 5.55, IS ≈ 8.86 at NFE=59**

---

### 3. 2D full-scale run

A quick win to validate against the paper's numbers. Should take ~20 min on T4.

```bash
python main.py --experiment 2d --dataset 8gaussians --steps 10
```

**Paper target: W2 ≈ 0.285 for 8gaussians**

The current small-run W2=2.04 is expected — that run used only 10 pool batches and 1000 iterations. A full run (200 batches, 20K iters) should drop it dramatically. A cross-check of the W2 metric itself is sketched below.
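
To sanity-check the reported W2 independently of the repo's `evaluation.py` (whose internals these notes don't confirm), a minimal sketch using the POT library; POT is an assumption here, not a repo dependency:

```python
import numpy as np
import ot  # POT: pip install pot

def w2(x: np.ndarray, y: np.ndarray) -> float:
    """Exact 2-Wasserstein distance between two equal-weight point clouds."""
    M = ot.dist(x, y, metric="sqeuclidean")  # pairwise squared Euclidean costs
    a = np.full(len(x), 1.0 / len(x))        # uniform source weights
    b = np.full(len(y), 1.0 / len(y))        # uniform target weights
    return float(np.sqrt(ot.emd2(a, b, M)))  # sqrt of the optimal squared cost

# e.g. w2(generated_samples, target_samples) on a few thousand 2D points per side
```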

Also run the other 2D datasets:
```bash
python main.py --experiment 2d --dataset moons --steps 10
python main.py --experiment 2d --dataset scurve --steps 10
python main.py --experiment 2d --dataset checkerboard --steps 10
```

---

## Medium-term — Code Improvements

### 4. Step-level resume within phases

The current `--resume-phase` skips completed phases but restarts the current phase from step 0, so for 100K-step phases a mid-phase interruption still loses progress. Needed (see the sketch after this list):
- Load `nsgf_checkpoint.pt` / `nsf_checkpoint.pt` / `predictor_checkpoint.pt`
- Resume optimizer state + step counter
- Continue from the last checkpoint step
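
A minimal sketch of what step-level resume could look like; `save_ckpt`/`load_ckpt` and the checkpoint keys are hypothetical names, not the repo's actual API:

```python
import torch

def save_ckpt(path, model, opt, step):
    # Persist weights, optimizer state, and the step counter together.
    torch.save({"model": model.state_dict(),
                "opt": opt.state_dict(),
                "step": step}, path)

def load_ckpt(path, model, opt):
    ckpt = torch.load(path, map_location="cpu")
    model.load_state_dict(ckpt["model"])
    opt.load_state_dict(ckpt["opt"])
    return ckpt["step"]

# In the training loop (e.g. Phase 1 with nsgf_checkpoint.pt):
# start = load_ckpt("nsgf_checkpoint.pt", model, opt) if resuming else 0
# for step in range(start, total_steps):
#     ...
#     if step % 1000 == 0:
#         save_ckpt("nsgf_checkpoint.pt", model, opt, step)
```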

### 5. EMA (Exponential Moving Average) for image models

The paper uses EMA for MNIST and CIFAR-10 (standard in diffusion/flow models). The current code doesn't implement it, which likely affects FID significantly.
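
A minimal EMA helper sketch; the decay of 0.9999 is a common default in flow/diffusion training, not a value taken from the paper:

```python
import copy
import torch

class EMA:
    """Shadow copy of a model whose weights track an exponential moving average."""
    def __init__(self, model: torch.nn.Module, decay: float = 0.9999):
        self.decay = decay
        self.shadow = copy.deepcopy(model).eval()
        for p in self.shadow.parameters():
            p.requires_grad_(False)

    @torch.no_grad()
    def update(self, model: torch.nn.Module):
        # shadow <- decay * shadow + (1 - decay) * current weights
        for s, p in zip(self.shadow.parameters(), model.parameters()):
            s.mul_(self.decay).add_(p, alpha=1.0 - self.decay)

# ema = EMA(model); call ema.update(model) after each optimizer step,
# then sample and evaluate with ema.shadow instead of model.
```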

### 6. Learning rate scheduler

The paper may use cosine decay or warmup; the current code uses a constant learning rate. Check whether this matters for convergence.
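
If it does, a standard linear-warmup plus cosine-decay schedule is easy to bolt on. This is a generic recipe, not the paper's confirmed schedule:

```python
import math
from torch.optim.lr_scheduler import LambdaLR

def warmup_cosine(optimizer, warmup_steps: int, total_steps: int) -> LambdaLR:
    # Multiplier on the base lr: linear ramp over warmup_steps, then cosine to 0.
    def factor(step: int) -> float:
        if step < warmup_steps:
            return step / max(1, warmup_steps)
        t = (step - warmup_steps) / max(1, total_steps - warmup_steps)
        return 0.5 * (1.0 + math.cos(math.pi * t))
    return LambdaLR(optimizer, factor)

# sched = warmup_cosine(opt, 1000, 100_000); call sched.step() every iteration.
```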

### 7. FID evaluation correctness

Verify that `evaluation.py`'s FID computation matches the standard protocol (see the cross-check sketch after this list):
- InceptionV3 features from the `pool3` layer (2048-dim)
- 10K generated vs 10K test samples
- Proper image preprocessing (resize to 299×299 for Inception)
- Compare against `pytorch-fid` or `clean-fid` as a sanity check
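
For the `clean-fid` route, a minimal cross-check sketch; the two directory paths are hypothetical and assume generated and real images have been dumped as PNGs:

```python
# pip install clean-fid
from cleanfid import fid

# Hypothetical directories; swap in the repo's actual sample-dump paths.
score = fid.compute_fid("samples/generated", "samples/real", mode="clean")
print(f"clean-fid: {score:.2f}")
```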

### 8. Inception Score evaluation

Implement IS properly for CIFAR-10 if it isn't already correct; the paper reports IS=8.86.
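
One low-effort option is the torchmetrics implementation; a sketch assuming `samples` is a float tensor in [0, 1] of shape (N, 3, 32, 32):

```python
import torch
from torchmetrics.image.inception import InceptionScore

# torchmetrics expects uint8 images of shape (N, 3, H, W) by default.
metric = InceptionScore()
imgs = (samples.clamp(0, 1) * 255).to(torch.uint8)
metric.update(imgs)
is_mean, is_std = metric.compute()
print(f"IS: {is_mean:.2f} ± {is_std:.2f}")
```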

---

## Longer-term — Towards Paper Numbers

### 9. Full paper hyperparameters

Once the code is stable, run with the exact paper configs (no iteration reduction):
- MNIST: 100K + 100K + 40K iterations
- CIFAR-10: 200K + 200K + 40K iterations
- This requires an A100 or multiple Kaggle sessions with checkpointing

### 10. Ablation: NSGF vs NSGF++

Run NSGF-only (Phase 1 only, no straight flow) and compare FID/W2 against NSGF++ to verify that the two-phase approach actually helps. The paper shows a clear improvement.

### 11. NFE sweep

The paper reports results at various NFE (number of function evaluations) budgets. Test (see the sketch after this list):
- MNIST: NFE = 10, 20, 40, 60
- CIFAR: NFE = 10, 20, 40, 59
- Compare the FID vs NFE curve against the paper's Figure 3
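
In ODE-style sampling, NFE is just the number of velocity-network evaluations along the integration path. A generic Euler sketch of how an NFE budget maps to step count; the repo's actual sampler interface is assumed, not confirmed:

```python
import torch

@torch.no_grad()
def euler_sample(velocity_fn, x0: torch.Tensor, nfe: int) -> torch.Tensor:
    """Integrate dx/dt = v(x, t) from t=0 to t=1 with `nfe` Euler steps,
    so NFE equals the number of velocity-network evaluations."""
    x, dt = x0, 1.0 / nfe
    for i in range(nfe):
        t = torch.full((x.shape[0],), i * dt, device=x.device)
        x = x + dt * velocity_fn(x, t)
    return x

# for nfe in (10, 20, 40, 60):
#     samples = euler_sample(model, noise, nfe)  # then compute FID per budget
```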

### 12. pykeops for faster Sinkhorn

Install `pykeops` to enable geomloss's `online` backend. This avoids materializing the full N×N cost matrix, so it should be much faster and use far less VRAM for the image experiments, possibly enough to restore the paper's original batch_size=128 on a T4. A geomloss usage sketch follows the block below.

```bash
pip install pykeops
# Then in config or code:
# backend: "online" instead of "tensorized"
```
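
What that looks like in geomloss; the `p` and `blur` values here are illustrative, not the repo's config:

```python
import torch
from geomloss import SamplesLoss

# "online" streams cost computations through KeOps instead of building
# the full N×N matrix in memory (the "tensorized" backend's behavior).
sinkhorn = SamplesLoss(loss="sinkhorn", p=2, blur=0.05, backend="online")

x = torch.randn(128, 3 * 32 * 32, device="cuda")  # e.g. a flattened CIFAR batch
y = torch.randn(128, 3 * 32 * 32, device="cuda")
print(sinkhorn(x, y))
```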

---

## Known Limitations

- **Single-GPU only** — no DDP, so T4×2 wastes one GPU
- **No EMA** — standard in flow/diffusion models, likely hurts FID (see item 5)
- **No mixed precision** — fp32 only; fp16/bf16 could halve VRAM (see the sketch after this list)
- **No gradient accumulation** — batch size is hard-limited by VRAM
- **Kaggle checkpoint persistence** — checkpoints are lost between sessions unless saved manually
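
The mixed-precision and gradient-accumulation gaps are cheap to close together. A minimal sketch of the combined training step; `compute_loss`, `loader`, `model`, and `optimizer` are hypothetical stand-ins for the repo's own objects:

```python
import torch

scaler = torch.cuda.amp.GradScaler()
accum_steps = 4  # effective batch = accum_steps * per-step batch size

for step, batch in enumerate(loader):
    with torch.cuda.amp.autocast():            # fp16 forward pass
        loss = compute_loss(model, batch) / accum_steps
    scaler.scale(loss).backward()              # gradients accumulate across steps
    if (step + 1) % accum_steps == 0:
        scaler.step(optimizer)                 # unscales grads, then steps
        scaler.update()
        optimizer.zero_grad(set_to_none=True)
```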