Update README.md
## News

[12/19/2023] 🔥 We released **WizardMath-7B-V1.1**, trained from Mistral-7B, the **SOTA 7B math LLM**. It achieves **83.2 pass@1** on GSM8k and **33.0 pass@1** on MATH. Use this [[**Demo**](http://47.103.63.15:50083/)] to chat with it.

[12/19/2023] 🔥 **WizardMath-7B-V1.1** outperforms **ChatGPT 3.5**, **Gemini Pro**, **Mixtral MOE**, and **Claude Instant** on GSM8K pass@1.

[12/19/2023] 🔥 **WizardMath-7B-V1.1** is comparable with **ChatGPT 3.5** and **Gemini Pro**, and surpasses **Mixtral MOE** on MATH pass@1.

| Model | Checkpoint | Paper | GSM8k | MATH | Demo |
| ----- | ---------- | ----- | ----- | ---- | ---- |
| **WizardMath-7B-V1.1** | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-7B-V1.1" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> | **83.2** | **33.0** | [[**Demo**](http://47.103.63.15:50083/)] |
| WizardMath-70B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-70B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> | **81.6** | **22.7** | |
| WizardMath-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> | **63.9** | **14.0** | |
| WizardMath-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-7B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> | **54.9** | **10.7** | |
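The checkpoints above can be loaded with the standard `transformers` API. The sketch below is a minimal, hypothetical example, not from this README: the chain-of-thought prompt template is an assumption based on the Alpaca-style format commonly used with WizardMath models, and `generate_answer` is an illustrative helper name.

```python
# Hypothetical inference sketch for WizardMath-7B-V1.1.
# The prompt template is an assumption (Alpaca-style CoT format),
# not confirmed by this README.

PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response: Let's think step by step."
)


def build_prompt(question: str) -> str:
    """Wrap a math question in the chain-of-thought instruction template."""
    return PROMPT_TEMPLATE.format(instruction=question)


def generate_answer(question: str, max_new_tokens: int = 512) -> str:
    """Download the checkpoint from the Hugging Face Hub and generate a solution."""
    # Imported lazily so prompt construction works without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "WizardLM/WizardMath-7B-V1.1"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    inputs = tokenizer(build_prompt(question), return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, dropping the prompt.
    return tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```

Running `generate_answer` downloads a ~7B-parameter model, so a GPU (or patience and ample RAM) is assumed.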
## [12/19/2023] Comparing WizardMath-7B-V1.1 with other open source 7B size math LLMs.