---
license: cc-by-4.0
language:
- zh
tags:
- mathematics
size_categories:
- 1K<n<10K
---
# CMATH
## Introduction
We present the Chinese Elementary School Math Word Problems (CMATH) dataset, comprising 1.7k elementary-school-level math word problems with detailed annotations, sourced from actual Chinese workbooks and exams. This dataset aims to provide a benchmark for assessing the following question: to what grade level of elementary school math do the abilities of popular large language models (LLMs) correspond? We evaluate a variety of popular LLMs, both commercial and open-source, and find that only GPT-4 achieves success (accuracy >= 60%) across all six elementary school grades, while other models falter at different grade levels.

Furthermore, we assess the robustness of LLMs by augmenting the original problems in the CMATH dataset with distracting information. Our findings reveal that GPT-4 is the sole model that maintains robustness, further distinguishing its performance from that of competing models. We anticipate that our CMATH dataset will expose limitations in LLMs' capabilities and promote their ongoing development and advancement.

## Datasets
### cmath_dev
Initial release of 600 examples from the CMATH dataset, with 100 problems from each elementary school grade.
We will release the remaining portion of the dataset by the end of the year.
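As a quick sanity check after downloading, the released examples can be tallied per grade. The snippet below is a minimal sketch; the Hub repository id `weitianwen/cmath`, the split name, and the `grade` field name are assumptions, so check the actual files before relying on them.

```python
from collections import Counter

def count_by_grade(examples):
    """Count examples per elementary school grade.

    The `grade` field name is an assumption about the annotation schema.
    """
    return Counter(ex["grade"] for ex in examples)

# Loading from the Hub (repository id and split name are assumptions):
#   from datasets import load_dataset
#   cmath = load_dataset("weitianwen/cmath", split="test")
#   print(count_by_grade(cmath))
```

With the full `cmath_dev` release, each of the six grades should count 100 problems.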
#### Examples and Annotations
![Examples](assets/example1.png)

#### Evaluation Results
![Model Performance](assets/plot1.png)
24
+
25
+ ### distractor
26
+ To assess the robustness of LLMs against "irrelevant" information, we manually created a small ``distractor dataset'' comprising 60 examples, 10 for each grade level. Each example consists of an original problem and five associated problems augmented with 1 ~ 5 piece(s) of irrelevant information which we refer to as distractor(s).
27
+
28
+ #### Examples
29
+ ![Examples](assets/example2.png)
30
+
31
+ #### Evaluation Results
32
+ ![Model Performance](assets/plot2.png)
33
+
34
+ ## Script
35
+ We provide a script `eval.py` that implements automated evaluation.
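For intuition only, automated grading of math word problems often reduces to extracting the final number from a free-form model response and comparing it to the golden answer. The sketch below illustrates that idea; it is not the actual logic of `eval.py`, and the extraction heuristic and field conventions are assumptions.

```python
import re

def extract_last_number(text):
    """Pull the last number out of a free-form model response (naive heuristic)."""
    matches = re.findall(r"-?\d+(?:\.\d+)?", text)
    return float(matches[-1]) if matches else None

def accuracy(responses, goldens):
    """Fraction of responses whose final number matches the golden answer."""
    hits = sum(
        extract_last_number(r) == float(g) for r, g in zip(responses, goldens)
    )
    return hits / len(goldens)
```

A model would then be said to "achieve success" at a grade when this accuracy reaches the 60% threshold used in the paper.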
36
+
37
+ ## License
38
+ MIT license
39
+
40
+ ## Citation
41
+ ```
42
+ @misc{wei2023cmath,
43
+ title={CMATH: Can Your Language Model Pass Chinese Elementary School Math Test?},
44
+ author={Tianwen Wei and Jian Luan and Wei Liu and Shuang Dong and Bin Wang},
45
+ year={2023},
46
+ eprint={2306.16636},
47
+ archivePrefix={arXiv},
48
+ primaryClass={cs.CL}
49
+ }
50
+ ```

Visit our git [repository](https://github.com/XiaoMi/cmath) for more details.
You may also read our [paper](https://arxiv.org/abs/2306.16636).