Update README.md

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

<img src="https://huggingface.co/AI-MO/Numina-Math-7B/resolve/main/thumbnail.png" alt="Numina Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

# Model Card for NuminaMath 7B

NuminaMath is a series of language models that are trained to solve math problems using tool-integrated reasoning. NuminaMath 7B won the first progress prize of the [AI Math Olympiad (AIMO)](https://aimoprize.com), with a score of 29/50 on the public and private test sets.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6200d0a443eb0913fa2df7cc/NyhBs_gzg40iwL995DO9L.png)

This model is a fine-tuned version of [deepseek-ai/deepseek-math-7b-base](https://huggingface.co/deepseek-ai/deepseek-math-7b-base) with two stages of supervised fine-tuning:

* **Stage 1:** fine-tune the base model on a large, diverse dataset of natural language math problems and solutions, where each solution is templated with Chain of Thought (CoT) to facilitate learning.
* **Stage 2:** fine-tune the model from Stage 1 on a synthetic dataset of tool-integrated reasoning, where each math problem is decomposed into a sequence of rationales, Python programs, and their outputs. Here we followed [Microsoft’s ToRA paper](https://arxiv.org/abs/2309.17452) and prompted GPT-4 to produce solutions in the ToRA format with code execution feedback; the sketch after this list illustrates the general shape of such a solution. Fine-tuning on this data produces a reasoning agent that can solve mathematical problems via a mix of natural language reasoning and use of the Python REPL to compute intermediate results.
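
To make this concrete, here is a purely illustrative sketch of the kind of interleaved rationale, code, and output trace described above. It is not taken from the training data: the Python code fence is the same one the usage snippet below extracts, while the output block and the boxed final answer are assumptions about the exact layout, following the general ToRA convention.

````
What is the remainder when $7^{100}$ is divided by $5$?

Since $7 \equiv 2 \pmod{5}$, we can compute the power modulo 5 directly with Python.

```python
print(pow(7, 100, 5))
```

```output
1
```

The remainder is $\boxed{1}$.
````
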
## Model description

- **Model type:** A 7B parameter math LLM fine-tuned in two stages of supervised fine-tuning, first on a dataset with math problem-solution pairs and then on a synthetic dataset with examples of multi-step generations using tool-integrated reasoning.
- **Language(s) (NLP):** Primarily English
- **License:** Apache 2.0
- **Finetuned from model:** [deepseek-ai/deepseek-math-7b-base](https://huggingface.co/deepseek-ai/deepseek-math-7b-base)

### Model Sources

[...]

```python
# Please refer to our full pipeline for a safer way to execute code.
python_code = re.findall(r"```python(.*?)```", text, re.DOTALL)[0]
exec(python_code)
```

In practice you will want to repeat the generation and code execution steps in a loop: run the extracted program, append its output to the prompt, and generate again until the model returns a final answer instead of more code.
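
As a purely illustrative sketch of that loop (and not the project's official pipeline), the helper below repeats generation and execution in rounds. Here `generate` is a hypothetical stand-in for however you sample from the model (for example, a wrapper around the snippet above), and the execution result is appended back to the prompt in an assumed ToRA-style output block; as the comment in the snippet warns, `exec` should be replaced with a proper sandbox in real use.

```python
import contextlib
import io
import re


def solve_with_tir(problem: str, generate, max_rounds: int = 4) -> str:
    """Minimal sketch of a tool-integrated reasoning loop (assumptions noted above)."""
    prompt = problem
    for _ in range(max_rounds):
        text = generate(prompt)  # hypothetical sampling call
        prompt += text

        # If the model emitted no code, treat its answer as final.
        code_blocks = re.findall(r"```python(.*?)```", text, re.DOTALL)
        if not code_blocks:
            break

        # Run the last code block and capture what it prints.
        # NOTE: replace exec with a proper sandbox for untrusted code.
        buffer = io.StringIO()
        with contextlib.redirect_stdout(buffer):
            exec(code_blocks[-1])

        # Feed the execution result back before the next round (assumed delimiter).
        prompt += "\n```output\n" + buffer.getvalue().strip() + "\n```\n"
    return prompt
```
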
## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->