---
language:
- en
license: mit
size_categories:
- 1K<n<10K
task_categories:
- text-generation
- question-answering
dataset_info:
- config_name: default
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: chain
    dtype: string
  - name: result
    dtype: string
  - name: result_float
    dtype: float64
  splits:
  - name: train
    num_bytes: 5373420.477987422
    num_examples: 7273
  - name: validation
    num_bytes: 147763.5220125786
    num_examples: 200
  - name: test
    num_bytes: 993169
    num_examples: 1319
  download_size: 3140154
  dataset_size: 6514353
- config_name: original-splits
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: chain
    dtype: string
  - name: result
    dtype: string
  - name: result_float
    dtype: float64
  splits:
  - name: train
    num_bytes: 5521184
    num_examples: 7473
  - name: test
    num_bytes: 993169
    num_examples: 1319
  download_size: 0
  dataset_size: 6514353
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
- config_name: original-splits
  data_files:
  - split: train
    path: original-splits/train-*
  - split: test
    path: original-splits/test-*
---
# Dataset Card for "Calc-gsm8k"
## Summary
This dataset is an instance of the gsm8k dataset, converted to a simple HTML-like language that can be easily parsed (e.g. by BeautifulSoup; a short parsing example follows the tag list below). The data contains 3 types of tags:
- `gadget`: a tag whose content is intended to be evaluated by calling an external tool (a sympy-based calculator in this case)
- `output`: the output of the external tool
- `result`: the final answer to the mathematical problem (a number)
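For illustration, here is a minimal parsing sketch with BeautifulSoup. The chain string and the `id="calculator"` attribute below are invented for the example and are not taken verbatim from the dataset.

```python
# Minimal parsing sketch: extract tool calls, tool outputs, and the final result
# from a chain written in the tag format described above.
from bs4 import BeautifulSoup

chain = (
    'There are 3 boxes with 12 pencils each, i.e. '
    '<gadget id="calculator">3 * 12</gadget><output>36</output> 36 pencils. '
    'Final result is <result>36</result>'
)

soup = BeautifulSoup(chain, "html.parser")
expressions = [g.get_text() for g in soup.find_all("gadget")]   # calls to the external tool
outputs = [o.get_text() for o in soup.find_all("output")]       # what the tool returned
result = soup.find("result").get_text()                         # final answer

print(expressions, outputs, result)   # ['3 * 12'] ['36'] 36
```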
## Supported Tasks
The dataset is intended for training Chain-of-Thought reasoning models that can use external tools to enhance the factuality of their responses. It presents in-context scenarios where models can outsource the computations in the reasoning chain to a calculator.
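As a rough illustration of the intended interaction (a sketch, not the authors' actual inference code), generation can be paused at each closing `</gadget>` tag, the expression evaluated by a sympy-based calculator, and the `<output>` tag appended before the model continues:

```python
# Sketch of a tool-use loop. `generate` is assumed to be any callable that extends
# the text and stops either right after "</gadget>" or once the chain is finished.
import re
import sympy

def run_with_calculator(generate, prompt, max_rounds=16):
    text = prompt
    for _ in range(max_rounds):
        text = generate(text)
        call = re.search(r"<gadget[^>]*>(.*?)</gadget>\s*$", text, re.DOTALL)
        if call is None:                      # no pending tool call -> chain is complete
            return text
        value = sympy.sympify(call.group(1))  # outsource the computation to sympy
        text += f"<output>{value}</output>"
    return text
```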
## Construction Process
The answers in the original dataset were written in a structured but non-standard format. The answers were therefore parsed, all arithmetical expressions were evaluated using a sympy-based calculator, the outputs were checked for consistency with the stated intermediate results, and the chains were exported into a simple HTML-like language that BeautifulSoup can parse.
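A simplified sketch of this conversion is below. It assumes the original gsm8k answer format with `<<expression=result>>` calculator annotations and a final `####` line; the tag attributes are illustrative, and the actual pipeline lives in the Calc-X repository.

```python
# Simplified, illustrative conversion of one gsm8k answer into the tag format.
import re
import sympy

def convert_answer(answer: str) -> str:
    chain, final = answer.split("####")
    final = final.strip()

    def to_tags(match: re.Match) -> str:
        expression, stated = match.group(1), match.group(2)
        value = sympy.sympify(expression)                   # sympy-based calculator
        assert sympy.simplify(value - sympy.sympify(stated)) == 0, "inconsistent step"
        return f'<gadget id="calculator">{expression}</gadget><output>{value}</output>'

    chain = re.sub(r"<<([^=>]+)=([^>]+)>>", to_tags, chain)
    return chain + f"<result>{final}</result>"

print(convert_answer("He buys 2*3=<<2*3=6>>6 apples.\n#### 6"))
```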
We also perform in-dataset and cross-dataset data-leak detection within the Calc-X collection. However, in the case of gsm8k, we found no data leaks and removed no examples from the data.
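For example, a very simple overlap check (much weaker than the actual detection described in the Calc-X paper) could look like this:

```python
# Toy leak check: exact question overlap between the train and test splits.
import datasets

ds = datasets.load_dataset("MU-NLPC/Calc-gsm8k")
train_questions = set(ds["train"]["question"])
overlaps = [q for q in ds["test"]["question"] if q in train_questions]
print(f"{len(overlaps)} exact train/test question overlaps")   # expected to be 0
```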
## Content and data splits
For convenience, we created a validation set by sampling 200 random examples from the original train split. This is the default variant:
datasets.load_dataset("MU-NLPC/Calc-gsm8k")
The original data splits can be loaded using:
datasets.load_dataset("MU-NLPC/Calc-gsm8k", "original-splits")
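Each example exposes the fields listed in the metadata above (`id`, `question`, `chain`, `result`, `result_float`):

```python
import datasets

ds = datasets.load_dataset("MU-NLPC/Calc-gsm8k")
example = ds["train"][0]
print(example["question"])
print(example["chain"])          # reasoning chain with <gadget>, <output>, and <result> tags
print(example["result"], example["result_float"])
```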
For more info about the content of the dataset, see the gsm8k HF dataset and the official repository.
## Related work
This dataset was created as part of a larger effort to train models capable of using a calculator during inference, which we call Calcformers.
- Calc-X collection - datasets for training Calcformers
- Calcformers collection - calculator-using models we trained and published on HF
- Calc-X and Calcformers paper
- Calc-X and Calcformers repo
## Licence
MIT, consistent with the original dataset.
## Cite
If you use this version of the dataset in your research, please cite the original GSM8K paper and the Calc-X collection as follows:
```bibtex
@inproceedings{kadlcik-etal-2023-soft,
    title = "Calc-X and Calcformers: Empowering Arithmetical Chain-of-Thought through Interaction with Symbolic Systems",
    author = "Marek Kadlčík and Michal Štefánik and Ondřej Sotolář and Vlastimil Martinek",
    booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Main track",
    month = dec,
    year = "2023",
    address = "Singapore, Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/2305.15017",
}
```