Datasets: ZhouChuYue committed · Commit 0fde1e0 · Parent: 4a41aee
Update README: Simplify benchmark list and unify table headers
README.md CHANGED
@@ -132,9 +132,9 @@ Natural web data is mostly declarative text. To enhance the model's instruction

We used the **MiniCPM-1.2B** model architecture and **MiniCPM3-4B** tokenizer for experimental verification. Each experiment was conducted with a training volume of **100 billion Tokens**, using the **Decay Verification** method (annealing from a 1.3T base model). We used the Lighteval library for model evaluation. Evaluation benchmarks include:

- - **Mathematical Reasoning:** GSM8K
+ - **Mathematical Reasoning:** GSM8K, MATH, Math-Bench, R-Bench-Math
- - **Code Generation:** HumanEval
+ - **Code Generation:** HumanEval, MBPP
- - **Comprehensive Knowledge:** MMLU
+ - **Comprehensive Knowledge:** MMLU, MMLU-STEM

### 🔧 Experimental Setup

@@ -169,7 +169,7 @@ To validate the effectiveness of our L0-L3 hierarchical framework, we conducted

We used a single dataset for independent training to directly compare the effects of different data sources:

- | Model | Average | MMLU | MMLU-STEM | Math | GSM8K | MBPP | HumanEval | R-
+ | Model | Average | MMLU | MMLU-STEM | Math | GSM8K | MBPP | HumanEval | R-Bench-Math | Math-Bench |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| **UltraData-Math (Ours)** | **43.79** | 51.67 | 45.93 | **37.02** | **61.79** | **49.27** | 32.93 | 23.38 | **48.33** |
| Nemotron-cc 4plus mind | 43.45 | 52.09 | 45.99 | 35.96 | 59.97 | 48.03 | 34.76 | **23.51** | 47.25 |
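As a quick sanity check on the table above: the Average column is the unweighted mean of the eight per-benchmark scores (for UltraData-Math, 350.32 / 8 = 43.79). A short Python snippet that restates the table values and verifies this:

```python
# Sanity check: the "Average" column equals the unweighted mean of the
# eight per-benchmark scores (values copied from the table above).
scores = {
    "UltraData-Math (Ours)": [51.67, 45.93, 37.02, 61.79, 49.27, 32.93, 23.38, 48.33],
    "Nemotron-cc 4plus mind": [52.09, 45.99, 35.96, 59.97, 48.03, 34.76, 23.51, 47.25],
}
reported = {"UltraData-Math (Ours)": 43.79, "Nemotron-cc 4plus mind": 43.45}

for model, vals in scores.items():
    mean = sum(vals) / len(vals)
    # Tolerance covers the half-unit rounding of two-decimal reporting.
    assert abs(mean - reported[model]) <= 0.005 + 1e-9, (model, mean)
    print(f"{model}: mean = {mean:.3f} -> reported {reported[model]}")
```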
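The setup paragraph in the first hunk mentions Lighteval for evaluation. Below is a minimal, hedged sketch of how such a run might be launched; the checkpoint path, output directory, task identifiers, and few-shot counts are all assumptions (Lighteval's CLI flags and task names vary by version, and in-house benchmarks such as Math-Bench and R-Bench-Math would need custom task definitions):

```python
# Minimal sketch of launching an evaluation like the one above with
# Lighteval's accelerate backend. The checkpoint path, output directory,
# task identifiers, and few-shot counts are ASSUMPTIONS, not the authors'
# exact command; consult the Lighteval docs for your installed version.
import subprocess

# Lighteval task specs use the form "suite|task|num_few_shot|truncate_few_shots".
tasks = ",".join([
    "lighteval|gsm8k|5|0",      # mathematical reasoning
    "lighteval|math|4|0",       # assumed id for MATH
    "lighteval|mmlu|5|0",       # assumed id for MMLU
    "lighteval|humaneval|0|0",  # assumed id for HumanEval
    "lighteval|mbpp|0|0",       # assumed id for MBPP
])

subprocess.run(
    [
        "lighteval", "accelerate",
        "--model_args", "pretrained=path/to/annealed-minicpm-1.2b",  # placeholder path
        "--tasks", tasks,
        "--output_dir", "./eval_results",
    ],
    check=True,
)
```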