Update README.md
README.md CHANGED
@@ -5,7 +5,7 @@ base_model:
 ---
 
 A preview version of FuseChat-3.0, under testing...
 
-Training configs:
+## Training configs:
 ```yaml
 # Model arguments
 model_name_or_path: AALF/FuseChat-Llama-3.1-8B-SFT
@@ -48,4 +48,22 @@ save_total_limit: 20
 seed: 42
 warmup_ratio: 0.1
 save_only_model: true
-```
+```
+
+## Evaluation Results
+| Datasets | Llama3.1-8B-Instruct | FuseChat-Llama-3.1-8B-SFT | FuseChat-Llama-3.1-8B-Instruct |
+|------------------------------------|-----------|-----------|-----------|
+| AlpacaEval-2 (LC/WR) | 28.3/28.7 | 41.3/37.7 | 65.4/63.3 |
+| Arena-Hard (WR/SC) | 28.1/23.8 | 38.7/29.0 | 58.2/46.4 |
+| MT-Bench | 8.38 | 8.54 | 9.00 |
+| AlignBench v1.1 | 4.61 | 6.25 | 6.69 |
+| LiveBench 0831 | 27.6 | 30.2 | 32.0 |
+| GSM8K | 85.9 | 87.0 | 88.0 |
+| MATH | 50.7 | 54.7 | 55.2 |
+| AMC 23 | 25.0 | 30.0 | 37.5 |
+| MMLU-Pro | 50.0 | 47.8 | 49.2 |
+| MMLU-redux | 67.2 | 68.4 | 69.2 |
+| GPQA-Diamond | 33.8 | 37.9 | 34.9 |
+| HumanEval | 69.5 | 69.5 | 71.3 |
+| MBPP | 75.4 | 71.4 | 72.0 |
+| LiveCodeBench 2408-2411 (all/easy) | 12.3/40.5 | 12.6/39.0 | 13.1/43.2 |
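The added evaluation table invites a quick programmatic comparison. Below is a minimal sketch that restates a few headline rows as data and computes the absolute gain of FuseChat-Llama-3.1-8B-Instruct over the base Llama3.1-8B-Instruct; the `scores` dict and `gain` helper are illustrative and not part of the repository, and the numbers are copied verbatim from the table in this commit.

```python
# Illustrative only: a few rows from the evaluation table above.
# Tuple order: (Llama3.1-8B-Instruct, FuseChat-Llama-3.1-8B-SFT,
#               FuseChat-Llama-3.1-8B-Instruct)
scores = {
    "AlpacaEval-2 LC": (28.3, 41.3, 65.4),
    "Arena-Hard WR":   (28.1, 38.7, 58.2),
    "GSM8K":           (85.9, 87.0, 88.0),
    "MATH":            (50.7, 54.7, 55.2),
}

def gain(bench: str) -> float:
    """Absolute gain of the final Instruct model over the base Instruct model."""
    base, _sft, fused = scores[bench]
    return round(fused - base, 1)

for name in scores:
    print(f"{name}: +{gain(name)}")
# e.g. "AlpacaEval-2 LC: +37.1"
```

The same pattern extends to any other row of the table if more benchmarks are added to the dict.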