Can LLMs learn from 1,000 in-context examples?
Introducing **MachineLearningLM** 🧪📊 — a model continuously pretrained on millions of synthetic tabular ML tasks, enabling robust many-shot in-context learning.
📈 **Scales from 8 to 1,024 examples**
📈 **~15% improvement** on unseen tabular tasks compared to o3-mini / GPT-5-mini / Qwen-2.5-7B
🌲 **Random-Forest–level robustness**
🧠 **MMLU score: 75.4%**
📄 Read the paper: https://arxiv.org/abs/2509.06806
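As a rough illustration of what many-shot tabular in-context learning looks like, the sketch below serializes labeled rows into a single prompt that a model like MachineLearningLM could complete. The prompt format, feature names, and row counts here are illustrative assumptions, not the model's documented interface.

```python
# Minimal sketch of a many-shot tabular prompt (format is a hypothetical
# assumption, not MachineLearningLM's official input schema).

def build_many_shot_prompt(examples, query_row):
    """Serialize (features, label) pairs plus one unlabeled query row.

    examples: list of (dict, str) pairs — in-context demonstrations.
    query_row: dict of features for the row to classify.
    """
    lines = []
    for feats, label in examples:
        feat_str = ", ".join(f"{k}={v}" for k, v in feats.items())
        lines.append(f"{feat_str} -> {label}")
    feat_str = ", ".join(f"{k}={v}" for k, v in query_row.items())
    lines.append(f"{feat_str} -> ?")  # model fills in the label
    return "\n".join(lines)

# Toy demonstration with 2 shots; in practice the same template scales
# to hundreds or a thousand in-context rows.
shots = [
    ({"age": 25, "income": 40}, "no"),
    ({"age": 52, "income": 120}, "yes"),
]
prompt = build_many_shot_prompt(shots, {"age": 40, "income": 90})
print(prompt)
```

Scaling to many shots is then just a matter of appending more serialized rows to the same prompt, subject to the model's context window.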