---
language:
- en
license: mit
size_categories:
- 1K<n<10K
task_categories:
- text-generation
- question-answering
dataset_info:
- config_name: default
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: chain
    dtype: string
  - name: result
    dtype: string
  - name: result_float
    dtype: float64
  splits:
  - name: train
    num_bytes: 5373420.477987422
    num_examples: 7273
  - name: validation
    num_bytes: 147763.5220125786
    num_examples: 200
  - name: test
    num_bytes: 993169
    num_examples: 1319
  download_size: 3140154
  dataset_size: 6514353.0
- config_name: original-splits
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: chain
    dtype: string
  - name: result
    dtype: string
  - name: result_float
    dtype: float64
  splits:
  - name: train
    num_bytes: 5521184
    num_examples: 7473
  - name: test
    num_bytes: 993169
    num_examples: 1319
  download_size: 0
  dataset_size: 6514353
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
- config_name: original-splits
  data_files:
  - split: train
    path: original-splits/train-*
  - split: test
    path: original-splits/test-*
---
# Dataset Card for "Calc-gsm8k"
## Summary
This dataset is an instance of the gsm8k dataset, converted to a simple HTML-like language that can be easily parsed (e.g. by BeautifulSoup). The data contains three types of tags (a parsing sketch follows the list):
- gadget: A tag whose content is intended to be evaluated by calling an external tool (sympy-based calculator in this case)
- output: An output of the external tool
- result: The final answer to the mathematical problem (a number)
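
For illustration, here is a minimal sketch of how a chain in this format can be parsed with BeautifulSoup. The chain string below is made up for the example (real examples may additionally carry tag attributes); only the tag names `gadget`, `output`, and `result` come from the format described above:

```python3
from bs4 import BeautifulSoup

# A hypothetical reasoning chain written in the HTML-like format described above.
chain = (
    "Natalia sold 48 clips in April and half as many in May."
    "<gadget>48/2</gadget><output>24</output>"
    "<gadget>48+24</gadget><output>72</output>"
    "<result>72</result>"
)

soup = BeautifulSoup(chain, features="html.parser")
calls = [g.get_text() for g in soup.find_all("gadget")]    # expressions sent to the calculator
outputs = [o.get_text() for o in soup.find_all("output")]  # calculator responses
result = soup.find("result").get_text()                    # final answer

print(calls)    # ['48/2', '48+24']
print(outputs)  # ['24', '72']
print(result)   # 72
```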
## Supported Tasks
The dataset is intended for training Chain-of-Thought reasoning **models able to use external tools** to enhance the factuality of their responses.
This dataset presents in-context scenarios where models can outsource the computations in the reasoning chain to a calculator.
## Construction Process
The answers in the original dataset are written in a structured but non-standard format. We therefore parsed the answers, evaluated all arithmetical expressions
with a sympy-based calculator, checked that the calculator outputs were consistent with the intermediate results in the original annotations, and exported the chains
into a simple HTML-like language that BeautifulSoup can parse.
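As a rough illustration of this consistency check (not the actual conversion script), one can evaluate an extracted expression with sympy and compare it to the intermediate result stated in the original answer; the expressions and values below are illustrative:

```python3
import sympy

# Hypothetical stand-in for the sympy-based calculator used during conversion.
def calc(expression: str) -> str:
    return str(sympy.sympify(expression))

# An extracted arithmetic expression and the intermediate result it should match
# (illustrative values, not taken from a specific gsm8k example).
assert calc("48/2") == "24"
assert calc("48+24") == "72"
```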
We also performed in-dataset and cross-dataset data-leak detection within the [Calc-X collection](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483).
However, in the case of gsm8k, we found no data leaks and removed no examples from the data.
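The exact detection procedure is described in the Calc-X paper; as a rough sketch of the idea, assuming a simple normalized-question overlap check (the real pipeline may differ), one could verify that no questions are shared between splits:

```python3
import datasets

ds = datasets.load_dataset("MU-NLPC/Calc-gsm8k")

def normalize(text: str) -> str:
    # Naive normalization for the overlap check (illustrative only).
    return " ".join(text.lower().split())

train_questions = {normalize(q) for q in ds["train"]["question"]}
test_questions = {normalize(q) for q in ds["test"]["question"]}

# Expected to print 0, since no leaked examples were found in gsm8k.
print(len(train_questions & test_questions))
```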
## Content and Data splits
For convenience, we created a validation set by sampling 200 random examples from the original train split. This is the default variant:
```python3
import datasets

datasets.load_dataset("MU-NLPC/Calc-gsm8k")
```
The original data splits can be loaded using:
```python3
import datasets

datasets.load_dataset("MU-NLPC/Calc-gsm8k", "original-splits")
```
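Once loaded, each example exposes the columns listed in the metadata above (`id`, `question`, `chain`, `result`, `result_float`). A minimal usage sketch:

```python3
import datasets

ds = datasets.load_dataset("MU-NLPC/Calc-gsm8k")
print(ds)  # train / validation / test splits in the default variant

example = ds["train"][0]
print(example["question"])      # the math word problem
print(example["chain"])         # reasoning chain with <gadget>, <output>, and <result> tags
print(example["result"])        # final answer as a string
print(example["result_float"])  # final answer as a float
```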
For more info about the content of the dataset, see [gsm8k HF dataset](https://huggingface.co/datasets/gsm8k) and the [official repository](https://github.com/openai/grade-school-math).
## Related work
This dataset was created as part of a larger effort to train models capable of using a calculator during inference, which we call Calcformers.
- [**Calc-X collection**](https://huggingface.co/collections/MU-NLPC/calc-x-652fee9a6b838fd820055483) - datasets for training Calcformers
- [**Calcformers collection**](https://huggingface.co/collections/MU-NLPC/calcformers-65367392badc497807b3caf5) - calculator-using models we trained and published on HF
- [**Calc-X and Calcformers paper**](https://arxiv.org/abs/2305.15017)
- [**Calc-X and Calcformers repo**](https://github.com/prompteus/calc-x)
Here are links to the original dataset:
- [**original gsm8k dataset**](https://huggingface.co/datasets/gsm8k)
- [**original gsm8k paper**](https://arxiv.org/abs/2110.14168)
- [**original gsm8k repo**](https://github.com/openai/grade-school-math)
## Licence
MIT, consistent with the original dataset.
## Cite
If you use this version of the dataset in research, please cite the [original GSM8K paper](https://arxiv.org/abs/2110.14168) and the [Calc-X collection](https://arxiv.org/abs/2305.15017) as follows:
```bibtex
@inproceedings{kadlcik-etal-2023-soft,
title = "Calc-X and Calcformers: Empowering Arithmetical Chain-of-Thought through Interaction with Symbolic Systems",
author = "Marek Kadlčík and Michal Štefánik and Ondřej Sotolář and Vlastimil Martinek",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Main track",
month = dec,
year = "2023",
address = "Singapore, Singapore",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2305.15017",
}
```