SSD samples solutions from the base model using non-unit temperature and top-k/top-p truncation, then fine-tunes on those samples via standard supervised learning. Despite its simplicity, SSD yields large gains on competitive programming benchmarks, with improvements concentrating on harder problems. The mechanism traces to resolving a *precision–exploration conflict*: SSD reshapes token distributions in a context-dependent way so that a single global decoding configuration becomes far more effective at evaluation time.
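The sampling step SSD relies on can be sketched as plain truncated decoding: scale the logits by a non-unit temperature, keep only the top-k tokens, then keep the smallest nucleus whose mass reaches top-p, and sample from the renormalized remainder. The sketch below is illustrative only; the default parameter values are hypothetical, not the configuration used to train this model.

```python
import math
import random

def sample_token(logits, temperature=1.2, top_k=20, top_p=0.95):
    """Sample one token id from raw logits using temperature scaling,
    top-k truncation, and top-p (nucleus) truncation.

    The default decoding parameters here are hypothetical placeholders,
    not the SSD paper's actual sampling configuration.
    """
    # Non-unit temperature flattens (>1) or sharpens (<1) the
    # distribution before truncation.
    scaled = [l / temperature for l in logits]

    # Numerically stable softmax over the scaled logits.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    probs = [(i, e / z) for i, e in enumerate(exps)]

    # Top-k: keep only the k most probable tokens.
    probs.sort(key=lambda ip: ip[1], reverse=True)
    probs = probs[:top_k]

    # Top-p: keep the smallest prefix whose cumulative mass >= top_p.
    kept, mass = [], 0.0
    for i, p in probs:
        kept.append((i, p))
        mass += p
        if mass >= top_p:
            break

    # Renormalize the surviving tokens and draw one.
    total = sum(p for _, p in kept)
    r = random.random() * total
    for i, p in kept:
        r -= p
        if r <= 0:
            return i
    return kept[-1][0]
```

In SSD, solutions decoded this way from the base model become the supervised fine-tuning targets, so no external teacher or reward signal is required.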
## Results

LiveCodeBench (%)

| Model | LCBv6 pass@1 | LCBv6 pass@5 | LCBv5 pass@1 | LCBv5 pass@5 |
|---|---|---|---|---|
| Qwen3-4B-Thinking-2507 (base) | 54.5 | 67.5 | 59.6 | 70.3 |
| **+ SSD (this model)** | **57.8** (+3.3) | **71.4** (+3.9) | **63.1** (+3.5) | **74.7** (+4.4) |

## Paper

**Embarrassingly Simple Self-Distillation Improves Code Generation**