Tags: Text Generation · Transformers · PyTorch · mistral · text-generation-inference · Inference Endpoints
Longhui98 committed commit c4a2bb7 (1 parent: 016a7bb)

Update README.md

Files changed (1): README.md (+7 −0)
README.md CHANGED
@@ -8,6 +8,13 @@ see our paper in https://arxiv.org/abs/2309.12284
 View the project page:
 https://meta-math.github.io/
 
+## Note
+
+All MetaMathQA data are augmented from the training sets of GSM8K and MATH.
+<span style="color:red"><b>None of the augmented data is from the testing set.</b></span>
+
+You can check the `original_question` field in `meta-math/MetaMathQA`; each item comes from the GSM8K or MATH train set.
+
 ## Model Details
 
 MetaMath-Mistral-7B is fully fine-tuned on the MetaMathQA datasets on top of the powerful Mistral-7B model. We are glad to see that using the MetaMathQA dataset and switching the base model from LLaMA-2-7B to Mistral-7B boosts GSM8K performance from 66.5 to **77.7**.
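The note above says every MetaMathQA item carries an `original_question` tracing it back to the GSM8K or MATH training set. A minimal sketch of that provenance check is below; the sample records and the helper name `all_from_train` are illustrative placeholders, not the real datasets (in practice you would load `meta-math/MetaMathQA` and the GSM8K/MATH train splits and compare the actual question strings).

```python
# Hypothetical provenance check: confirm that every augmented item's
# original_question appears in the set of training-set questions.
# The data below is made up for illustration only.

train_questions = {
    "Question A from the GSM8K train split",
    "Question B from the MATH train split",
}

metamathqa_sample = [
    {"query": "Rephrased version of question A",
     "original_question": "Question A from the GSM8K train split"},
    {"query": "Backward-reasoning version of question B",
     "original_question": "Question B from the MATH train split"},
]

def all_from_train(items, train_set):
    """True iff every item's original_question is in the train set."""
    return all(item["original_question"] in train_set for item in items)

print(all_from_train(metamathqa_sample, train_questions))  # → True
```

If any item pointed at a question outside the train set, the check would return `False`, which is exactly the leakage the note rules out.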