## Summary

This dataset is an instance of the math_qa dataset, converted to a simple HTML-like language that can be easily parsed (e.g., by BeautifulSoup). The data contains three types of tags:
- `gadget`: A tag whose content is intended to be evaluated by calling an external tool (a sympy-based calculator in this case)
- `output`: The output of the external tool
- `result`: The final answer to the mathematical problem (a number)
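
For illustration, a chain in this format can be parsed with BeautifulSoup along these lines (the chain string below is made up for demonstration; real values are in the `chain` column):

```python
from bs4 import BeautifulSoup

# A made-up chain in the dataset's HTML-like format (illustrative only;
# real values are stored in the `chain` column).
chain = (
    "<gadget>2 * 3.14 * 5</gadget>"
    "<output>31.4</output>"
    "<result>31.4</result>"
)

soup = BeautifulSoup(chain, "html.parser")
for tag in soup.find_all(["gadget", "output", "result"]):
    print(f"{tag.name}: {tag.get_text()}")
```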
## Supported Tasks
The dataset is intended for training Chain-of-Thought reasoning **models able to use external tools** to enhance the factuality of their responses.
This dataset presents in-context scenarios where models can outsource the computations in the reasoning chain to a calculator.
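
As a rough sketch of such a scenario (our illustration, not the exact inference code), generation can be paused whenever the model closes a `gadget` tag, the enclosed expression evaluated with a sympy-based calculator, and the result injected back as an `output` tag:

```python
import re

import sympy

# Illustrative sketch of the tool-use loop: each <gadget>...</gadget> span is
# evaluated with sympy and its value appended as an <output> tag.
def inject_tool_outputs(generated: str) -> str:
    def evaluate(match: re.Match) -> str:
        value = sympy.sympify(match.group(1)).evalf()
        return f"{match.group(0)}<output>{value}</output>"

    return re.sub(r"<gadget>(.*?)</gadget>", evaluate, generated)

# Prints the model's text with the calculator's result injected after the gadget call.
print(inject_tool_outputs("Area: <gadget>3.14 * 5**2</gadget>"))
```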
## Construction Process
We took the original math_qa dataset, parsed the nested formulas, linearized them into a sequence (chain) of operations, and replaced all advanced function calls (such as `circle_area`) with explicit elementary operations. We evaluated all the steps in each example and filtered out the examples whose evaluation does not match the answer selected as correct in the data within a 5% tolerance. The sequence of steps is then saved in the HTML-like language in the `chain` column. We keep the original columns in the dataset for convenience.
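
The 5% tolerance corresponds to a relative comparison along the following lines (a minimal sketch of the described filter, not the exact code we used):

```python
import math

# Keep an example only if the evaluated chain matches the option selected
# as correct within a 5% relative tolerance (illustrative sketch).
def matches_correct_answer(chain_result: float, correct_option: float) -> bool:
    return math.isclose(chain_result, correct_option, rel_tol=0.05)

assert matches_correct_answer(78.5, 80.0)       # kept: within 5%
assert not matches_correct_answer(78.5, 100.0)  # filtered out
```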
You can read more information about this process in our [technical report](https://arxiv.org/abs/2305.15017).
Content and splits correspond to the original math_qa dataset.
See [mathqa HF dataset](https://huggingface.co/datasets/math_qa) and [official website](https://math-qa.github.io/) for more info.
Columns:
- `problem` - description of a mathematical problem in natural language
- `options` - dictionary with choices 'a' to 'e' as possible solutions
- `correct` - correct answer, one of 'a', ..., 'e'
- `rationale` - human-annotated free-text reasoning that leads to the correct answer
- `annotated_formula` - human-annotated nested expression that (approximately) evaluates to the selected correct answer
- `linear_formula` - same as `annotated_formula`, but linearized. Provided by the original math_qa authors
- `chain` - linearized `annotated_formula`, provided by us. Converted to the HTML-like language, with expressions that can be evaluated using our sympy-based calculator
- `index` - index of the example in the original math_qa dataset
- `options_float` - same as `options`, but with simple parsing and evaluation applied to convert the options to floats. This is best-effort only; not all values are (or can be) extracted correctly
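
A quick way to load the data and inspect these columns (the dataset ID below is an assumption, not stated in this card; substitute this repository's actual ID on the Hub):

```python
from datasets import load_dataset

# "MU-NLPC/Calc-math_qa" is assumed; replace with this repository's ID if it differs.
ds = load_dataset("MU-NLPC/Calc-math_qa", split="train")

print(ds.column_names)
example = ds[0]
print(example["problem"])
print(example["chain"])
print(example["options_float"])
```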
## Licence

Apache 2.0, consistent with the original dataset.

## Cite

If you use this version of the dataset in your research, please cite the [original MathQA paper](https://arxiv.org/abs/1905.13319) and also [our technical report](https://arxiv.org/abs/2305.15017), as follows:

```bibtex
@article{kadlcik2023calcx,
  title={Calc-X and Calcformers: Empowering Arithmetical Chain-of-Thought through Interaction with Symbolic Systems},
  author={Kadl{\v{c}}{\'i}k, Marek and {\v{S}}tef{\'a}nik, Michal and Sotol{\'a}{\v{r}}, Ond{\v{r}}ej and Martinek, Vlastimil},
  journal={arXiv preprint arXiv:2305.15017},
  year={2023}
}
```
|