hawei committed
Commit 8c8ba18 · verified · 1 Parent(s): c4c3925

Update README.md

Files changed (1)
  1. README.md +19 -19
README.md CHANGED
@@ -88,29 +88,29 @@ The plot below highlights the alignment comparison of the model trained with Con
  ![Alignment Comparison](plots/alignment_comparison.png)

  ### Benchmark Results Table
- The table below summarizes the evaluation results across mathematical tasks and original capabilities.
+ The table below summarizes evaluation results across mathematical tasks and original capabilities.

- | **Model** | **MH** | **M** | **GSM8K** | **Math Avg.** | **ARC** | **GPQA** | **MMLU** | **MMLUP** | **Orig. Avg.** | **Overall** |
- |-------------------|-----------|----------|-----------|---------------|---------|----------|----------|-----------|----------------|-------------|
- | Llama3.1-8B-Inst | 23.7 | 50.9 | 85.6 | 52.1 | 83.4 | 29.9 | 72.4 | 46.7 | 60.5 | 56.3 |
- | OpenMath2-Llama3 | 38.4 | 64.1 | 90.3 | 64.3 | 45.8 | 1.3 | 4.5 | 19.5 | 12.9 | 38.6 |
- | **Full Tune** | **38.5** | **63.7** | 90.2 | **63.9** | 58.2 | 1.1 | 7.3 | 23.5 | 16.5 | 40.1 |
- | Partial Tune | 36.4 | 61.4 | 89.0 | 61.8 | 66.2 | 6.0 | 25.7 | 30.9 | 29.3 | 45.6 |
- | Stack Exp. | 35.6 | 61.0 | 90.8 | 61.8 | 69.3 | 18.8 | 61.8 | 43.1 | 53.3 | 57.6 |
- | Hybrid Exp.n | 34.4 | 61.1 | 90.1 | 61.5 | **81.8**| **25.9** | 67.2 | **43.9** | 57.1 | 59.3 |
- | **Control LLM*** | 38.1 | 62.7 | **90.4** | 63.2 | 79.7 | 25.2 | **68.1** | 43.6 | **57.2** | **60.2** |
+ | **Model** | **MH** | **M** | **G8K** | **M-Avg** | **ARC** | **GPQA** | **MLU** | **MLUP** | **O-Avg** | **Overall** |
+ |-------------------|--------|--------|---------|-----------|---------|----------|---------|----------|-----------|-------------|
+ | Llama3.1-8B-Inst | 23.7 | 50.9 | 85.6 | 52.1 | 83.4 | 29.9 | 72.4 | 46.7 | 60.5 | 56.3 |
+ | OpenMath2-Llama3 | 38.4 | 64.1 | 90.3 | 64.3 | 45.8 | 1.3 | 4.5 | 19.5 | 12.9 | 38.6 |
+ | **Full Tune** | **38.5**| **63.7**| 90.2 | **63.9** | 58.2 | 1.1 | 7.3 | 23.5 | 16.5 | 40.1 |
+ | Partial Tune | 36.4 | 61.4 | 89.0 | 61.8 | 66.2 | 6.0 | 25.7 | 30.9 | 29.3 | 45.6 |
+ | Stack Exp. | 35.6 | 61.0 | 90.8 | 61.8 | 69.3 | 18.8 | 61.8 | 43.1 | 53.3 | 57.6 |
+ | Hybrid Exp. | 34.4 | 61.1 | 90.1 | 61.5 | **81.8**| **25.9** | 67.2 | **43.9** | 57.1 | 59.3 |
+ | **Control LLM*** | 38.1 | 62.7 | **90.4**| 63.2 | 79.7 | 25.2 | **68.1**| 43.6 | **57.2** | **60.2** |

  ---

- ### Explanation of Metrics
+ ### Explanation:
  - **MH**: MathHard
- - **M**: Math - General math reasoning
- - **GSM8K**: Grade-school math
- - **Math Avg.**: Average performance across Math Hard, Math, and GSM8K
- - **ARC**: AI reasoning challenge
- - **GPQA**: General knowledge question answering
- - **MMLU**: Massive Multitask Language Understanding
- - **MMLUP**: MMLU (Professional subset)
- - **Orig. Avg.**: Average original capabilities' performance across ARC, GPQA, MMLU, and MMLU Pro
+ - **M**: Math
+ - **G8K**: GSM8K
+ - **M-Avg**: Math - Average across MathHard, Math, and GSM8K
+ - **ARC**: ARC benchmark
+ - **GPQA**: General knowledge QA
+ - **MLU**: MMLU (Massive Multitask Language Understanding)
+ - **MLUP**: MMLU Pro
+ - **O-Avg**: Orginal Capability - Average across ARC, GPQA, MMLU, and MMLUP
  - **Overall**: Combined average across all tasks
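The README does not spell out how the **Overall** column is aggregated, but the reported values are consistent, to within about 0.1, with a plain mean of the two group averages (M-Avg and O-Avg); the small residual suggests the group averages are carried at full precision before the final rounding. A minimal sketch of that check, with all numbers taken from the table above (the assumption that Overall = mean(M-Avg, O-Avg) is the editor's, not stated in the commit):

```python
# Sketch: recompute "Overall" as the plain mean of M-Avg and O-Avg and compare
# against the values reported in the benchmark table. The aggregation rule is an
# assumption; only the input numbers come from the table.
rows = {
    # model: (M-Avg, O-Avg, reported Overall)
    "Llama3.1-8B-Inst": (52.1, 60.5, 56.3),
    "OpenMath2-Llama3": (64.3, 12.9, 38.6),
    "Full Tune":        (63.9, 16.5, 40.1),
    "Partial Tune":     (61.8, 29.3, 45.6),
    "Stack Exp.":       (61.8, 53.3, 57.6),
    "Hybrid Exp.":      (61.5, 57.1, 59.3),
    "Control LLM":      (63.2, 57.2, 60.2),
}

for model, (m_avg, o_avg, reported) in rows.items():
    recomputed = (m_avg + o_avg) / 2
    # Differences of up to ~0.1 appear because the table rounds the group averages.
    print(f"{model:<18} recomputed={recomputed:.2f}  reported={reported}")
```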