---
base_model:
- Qwen/Qwen2.5-7B-Instruct
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
datasets:
- MachineLearningLM/machinelearninglm-scm-synthetic-tabularml
tags:
- Tabular Classification
---
# MachineLearningLM
This repository contains the model presented in the paper [MachineLearningLM: Scaling Many-shot In-context Learning via Continued Pretraining](https://huggingface.co/papers/2509.06806).
## Model Summary
Can LLMs learn from 1,000 in-context examples?
Introducing **MachineLearningLM**, a model continuously pretrained on millions of synthetic tabular ML tasks, enabling robust many-shot in-context learning.

- **Scales from 8 to 1,024 examples**
- **~15% improvement** on unseen tabular tasks compared to o3-mini / GPT-5-mini / Qwen-2.5-7B-Instruct
- **Random-Forest-level numerical modeling robustness**
- **MMLU score: 75.4%**

Read the paper: https://huggingface.co/papers/2509.06806

GitHub: https://github.com/HaoAreYuDong/MachineLearningLM
## Evaluation and Validation
We provide an automated evaluation framework: simply configure the parameters to run validation and evaluation.
**The code is now open-sourced at our [GitHub repository](https://github.com/HaoAreYuDong/MachineLearningLM).**
**Quick Start**
```bash
# Install the evaluation framework's dependencies
pip install -r requirements.txt

# Run many-shot prediction on the demo prompt file with MachineLearningLM-7B-v1
python ./src/evaluation/model_pred/dl_model_pred.py \
  --input_dir ./demo_input.jsonl \
  --output_dir ./demo_output.jsonl \
  --model_name MachineLearningLM/MachineLearningLM-7B-v1
```
**Pipeline**
```bash
# modify the evaluate_parameters.sh file
source evaluate_parameters.sh
# Option 1 End-to-End Pipeline
./scripts/evaluate_pipeline.sh
# Option 2 Parallel Processing
./scripts/multi_process/data_prep.sh
./scripts/multi_process/prompt_gen.sh # For deep learning only
./scripts/multi_process/model_pred.sh
./scripts/multi_process/evaluation.sh
./scripts/multi_process/report.sh
# Option 3 Sequential Processing
./scripts/single_process/data_prep.sh
./scripts/single_process/prompt_gen.sh # For deep learning only
./scripts/single_process/model_pred.sh
./scripts/single_process/evaluation.sh
./scripts/single_process/report.sh
```
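For reference, `evaluate_parameters.sh` is where the shared settings live. The sketch below only illustrates the kind of values it centralizes; the variable names are hypothetical, so consult the script for the real ones.

```bash
# Hypothetical sketch only; the actual variable names are defined in evaluate_parameters.sh.
export MODEL_NAME="MachineLearningLM/MachineLearningLM-7B-v1"
export INPUT_DIR="./datahub_inputs/data_demo"   # demo datasets shipped with the repo
export OUTPUT_DIR="./eval_outputs"
export SAMPLE_SIZES="8 64 512 1024"             # numbers of in-context examples to test
```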
For more usage details, please visit our GitHub.
**Quantized Checkpoints (GGUF)**
https://huggingface.co/mradermacher/MachineLearningLM-7B-v1-GGUF
https://huggingface.co/QuantFactory/MachineLearningLM-7B-v1-GGUF
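Below is a minimal sketch of running one of these community quantizations with llama.cpp. The quantization level and filename are assumptions, so check the repository's file list first.

```bash
# Illustrative only: the Q4_K_M filename is an assumption; pick a file that exists in the repo.
huggingface-cli download mradermacher/MachineLearningLM-7B-v1-GGUF \
  MachineLearningLM-7B-v1.Q4_K_M.gguf --local-dir ./gguf

# Run an interactive prompt with llama.cpp
llama-cli -m ./gguf/MachineLearningLM-7B-v1.Q4_K_M.gguf -p "Your many-shot tabular prompt here"
```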
## TabICL Evaluation
**This part of the code must be run in an environment with the `tabicl` and `openpyxl` libraries installed.**
The TabICL evaluation code lives separately in `./src/evaluation/tabicl_evaluate.py`. Use `./scripts/tabicl_evaluate.sh` to obtain the TabICL evaluation results.
Use `--datasets` to specify the datasets to evaluate and `--sample_sizes` to set the number of shots.
If multiple datasets need to be evaluated, separate them with spaces. To evaluate all CSV files in the input folder, use **all**.
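A hypothetical invocation might look like the following; the dataset names and shot counts are illustrative, and the script may expect these values to be set inside it rather than passed as flags.

```bash
# Illustrative only: evaluate two hypothetical datasets at three shot counts.
./scripts/tabicl_evaluate.sh --datasets bank heart --sample_sizes 8 64 512

# Or evaluate every CSV file in the input folder:
./scripts/tabicl_evaluate.sh --datasets all --sample_sizes 64
```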
## Prior Data
MachineLearningLM uses code from TabICL to generate prior data.
Use `./scripts/generate_data.sh` to generate the prior data. It produces the corresponding `.pt` and `.csv` files and normalizes the feature values in the CSV files to the range 0-999, as described in the paper.
### Parameter Introduction (refer to the comments in `tabicl/src/tabicl/prior/dataset.py`)
**Data Scale & Structure**
| Parameter | Type | Description |
| :------------- | :--- | :------------------------------------------------------ |
| `min_features` | int | Minimum number of features per dataset |
| `max_features` | int | Maximum number of features per dataset |
| `max_classes` | int | Maximum number of target classes |
| `min_seq_len` | int | Minimum samples per dataset. Uses `max_seq_len` if None |
| `max_seq_len`  | int  | Maximum samples per dataset (exclusive upper bound)     |
**Batch Configuration**
| Parameter | Type | Description |
| :--------------------- | :--- | :----------------------------------------------------------- |
| `batch_size` | int | Total number of datasets to generate per batch |
| `batch_size_per_gp` | int | Number of datasets per group (shared characteristics) |
| `batch_size_per_subgp` | int | Number of datasets per subgroup (similar causal structures). Defaults to `batch_size_per_gp` if None |
**Sequence Length Control**
| Parameter | Type | Description |
| :--------------- | :--- | :----------------------------------------------------------- |
| `log_seq_len` | bool | Sample sequence length from log-uniform distribution if True |
| `seq_len_per_gp` | bool | Sample sequence length per group (enables variable-sized datasets) |
| `replay_small` | bool | Occasionally sample smaller sequences for model robustness |
**Train-Test Split**
| Parameter | Type | Description |
| :--------------- | :-------- | :----------------------------------------------------------- |
| `min_train_size` | int/float | Start position/ratio for train split (int: absolute, float: fractional) |
| `max_train_size` | int/float | End position/ratio for train split (int: absolute, float: fractional) |
**Generation Method**
| Parameter | Type | Description |
| :----------- | :--- | :----------------------------------------------------------- |
| `prior_type` | str | Prior type: 'mlp_scm', 'tree_scm', or 'mix_scm' (random selection) |
| `fixed_hp` | dict | Fixed structural configuration parameters |
| `sampled_hp` | dict | Parameters sampled during generation |
**Computation Settings**
| Parameter | Type | Description |
| :------------------------- | :--- | :------------------------------------------------ |
| `n_jobs` | int | Number of parallel jobs (-1 = use all processors) |
| `num_threads_per_generate` | int | Number of threads per generation job |
| `device` | str | Computation device ('cpu' or 'cuda') |
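For illustration, here is a hypothetical set of values consistent with the tables above. The names mirror the parameters documented in `dataset.py`, but the real defaults and the exact way `./scripts/generate_data.sh` consumes them are defined in the script itself.

```bash
# Hypothetical values, for illustration only; see ./scripts/generate_data.sh for the real interface.
min_features=5        # at least 5 features per synthetic dataset
max_features=30       # at most 30 features per synthetic dataset
max_classes=10        # up to 10 target classes
max_seq_len=2048      # samples per dataset (exclusive upper bound)
prior_type=mix_scm    # randomly mix 'mlp_scm' and 'tree_scm' causal priors
n_jobs=-1             # use all available CPU cores
```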
## Train
MachineLearningLM uses the LLaMA-Factory framework for training.
### Training Environment Configuration
```bash
cd ./third_party/LLaMA-Factory
pip install -e ".[torch,metrics]" --no-build-isolation
pip install wandb
```
Use `./scripts/train.sh` for training.
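As a rough sketch, a LLaMA-Factory run typically boils down to a single CLI call; the YAML path below is illustrative, and the configuration actually used by `./scripts/train.sh` is authoritative.

```bash
# Hypothetical launch; see ./scripts/train.sh for the real setup and config file.
cd ./third_party/LLaMA-Factory
llamafactory-cli train /path/to/machinelearninglm_sft_config.yaml
```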
## Project Structure
```
MachineLearningLM/
├── src/
│   ├── evaluation/
│   │   ├── data_prep/          # Data preprocessing and chunking utilities
│   │   ├── prompt_gen/         # Prompt generation for deep learning models
│   │   ├── model_pred/         # Model inference (ML and DL prediction engines)
│   │   ├── result_proc/        # 5-layer evaluation architecture and metrics processing
│   │   ├── zero_summary/       # Result summarization and report generation
│   │   └── tabicl_evaluate.py
│   └── prior_data/
│       └── pt_to_csv.py
├── scripts/
│   ├── single_process/         # Sequential execution shell scripts
│   ├── multi_process/          # Parallel execution shell scripts (with _mp suffix)
│   ├── evaluate_parameters.sh  # Global parameter configuration
│   ├── evaluate_pipeline.sh    # Automated end-to-end pipeline
│   ├── generate_data.sh
│   ├── tabicl_evaluate.sh
│   └── train.sh
├── datahub_inputs/
│   ├── data_demo/              # Demo datasets for testing
│   └── data_raw/               # Raw input datasets
├── third_party/
│   ├── tabicl/
│   └── LLaMA-Factory/
├── requirements.txt            # Python dependencies for the evaluation framework
├── README.md
├── README_zh.md
├── THIRD_PARTY_NOTICES.md
└── LICENSE
```