WizardLM committed
Commit c6326e9 • 1 Parent(s): caf588a

Update README.md

Files changed (1):
  1. README.md +3 -3
README.md CHANGED

@@ -25,7 +25,7 @@ pipeline_tag: text-generation
 
 ## News
 
-[12/19/2023] 🔥 We released **WizardMath-7B-V1.1**, the **SOTA 7B math LLM**, achieves **83.2 pass@1** on GSM8k, and **33.0 pass@1** on MATH.
+[12/19/2023] 🔥 We released **WizardMath-7B-V1.1** trained from Mistral-7B, the **SOTA 7B math LLM**, achieves **83.2 pass@1** on GSM8k, and **33.0 pass@1** on MATH.
 
 [12/19/2023] 🔥 **WizardMath-7B-V1.1** outperforms **ChatGPT 3.5**, **Gemini Pro**, **Mixtral MOE**, and **Claude Instant** on GSM8K pass@1.
 
@@ -45,9 +45,9 @@ pipeline_tag: text-generation
 | ----- |------| ---- |
 | MPT-7B | 6.8 | 3.0 |
 |Llama 1-7B | 11.0 | 2.9 |
-|Llama 2-7b|12.3 |2.8 |
+|Llama 2-7B|12.3 |2.8 |
 |Yi-6b| 32.6 |5.8 |
-|Mistral-7b|37.8 |9.1 |
+|Mistral-7B|37.8 |9.1 |
 |Qwen-7b|47.8 |9.3 |
 | RFT-7B | 50.3 | -- |
 | MAmmoTH-7B (COT) | 50.5 | 10.4 |
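
For readers of the updated README, a minimal sketch of running the model with the Hugging Face `transformers` text-generation pipeline, matching the `pipeline_tag: text-generation` metadata visible in the hunk headers. The repo id `WizardLM/WizardMath-7B-V1.1` and the Alpaca-style prompt below are assumptions inferred from the model name, not part of this commit:

```python
# A minimal sketch, not part of this commit: load the model the README describes
# with the `transformers` text-generation pipeline.
# Assumptions: repo id "WizardLM/WizardMath-7B-V1.1" and an Alpaca-style
# chain-of-thought prompt; neither is taken from this diff.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="WizardLM/WizardMath-7B-V1.1",
    device_map="auto",  # requires `accelerate`; drop to load on a single device
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n"
    "Janet has 3 apples and buys 2 more. How many apples does she have?\n\n"
    "### Response: Let's think step by step."
)

# Generate a step-by-step solution; adjust max_new_tokens for longer problems.
print(pipe(prompt, max_new_tokens=256)[0]["generated_text"])
```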