---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-7B-Instruct
---

# MachineLearningLM

## model summary

Can LLMs learn from 1,000 in-context examples?

Introducing **MachineLearningLM** 🧪📊: a model continually pretrained on millions of synthetic tabular ML tasks, enabling robust many-shot in-context learning.

📈 **Scales from 8 to 1,024 in-context examples**

📈 **~15% improvement** on unseen tabular tasks over o3-mini / GPT-5-mini / Qwen2.5-7B-Instruct

🌲 **Random-Forest-level robustness**

🧠 **MMLU score: 75.4%**

📄 Read the paper: [MachineLearningLM: Continued Pretraining Language Models on Millions of Synthetic Tabular Prediction Tasks Scales In-Context ML](https://huggingface.co/papers/2509.06806)

💻 GitHub: https://github.com/HaoAreYuDong/MachineLearningLM
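
As an illustration of many-shot in-context prediction, the sketch below loads the checkpoint with 🤗 Transformers (it inherits the Qwen2.5 chat template from its base model), serializes a few labeled rows as in-context examples, and asks for the label of a query row. The row format and instruction wording are illustrative assumptions rather than the exact prompts used in the paper; the prompt-generation code in the GitHub repository is the canonical reference.

```python
# Minimal sketch of many-shot in-context tabular prediction.
# NOTE: the row serialization and instruction text are illustrative assumptions,
# not the exact prompt format used in the paper's evaluation pipeline.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MachineLearningLM/MachineLearningLM-7B-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Labeled rows act as in-context examples; the shot list can grow to hundreds of rows.
train_rows = [
    ("age=39, income=45000, balance=1200", "no"),
    ("age=52, income=88000, balance=-300", "yes"),
]
query_row = "age=45, income=61000, balance=500"

shots = "\n".join(f"{features} -> label: {label}" for features, label in train_rows)
prompt = (
    "Predict the label of the final row from the labeled examples.\n"
    f"{shots}\n"
    f"{query_row} -> label:"
)

messages = [{"role": "user", "content": prompt}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=8)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

In practice the shot list can be extended toward the 1,024-example regime the model is trained for; the evaluation framework in the next section automates prompt generation, prediction, and scoring.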

## evaluation and validation

We provide an automated evaluation framework: configure the parameters and it handles validation and evaluation end to end.
**The code is now open-sourced in our GitHub repository.**

**Quick Start**

```bash
pip install -r requirements.txt
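
# Run the released model on the demo input and write its predictions to the output path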
python ./src/evaluation/model_pred/dl_model_pred.py \
  --input_dir ./demo_input.jsonl \
  --output_dir ./demo_output.jsonl \
  --model_name MachineLearningLM/MachineLearningLM-7B-v1
```
**Pipeline**
```bash
# First, edit evaluate_parameters.sh to set your parameters
source evaluate_parameters.sh

# Option 1: End-to-End Pipeline
./scripts/evaluate_pipeline.sh

# Option 2: Parallel Processing
./scripts/multi_process/data_prep.sh
./scripts/multi_process/prompt_gen.sh  # For deep learning only
./scripts/multi_process/model_pred.sh
./scripts/multi_process/evaluation.sh
./scripts/multi_process/report.sh

# Option 3: Sequential Processing
./scripts/single_process/data_prep.sh
./scripts/single_process/prompt_gen.sh  # For deep learning only
./scripts/single_process/model_pred.sh
./scripts/single_process/evaluation.sh
./scripts/single_process/report.sh
```

For more usage details, please visit our GitHub repository.