---
dataset_info:
  features:
  - name: year
    dtype: string
  - name: id
    dtype: string
  - name: problem
    dtype: string
  - name: solution
    dtype: string
  - name: answer_type
    dtype: string
  - name: source
    dtype: string
  - name: type
    dtype: string
  - name: original_problem
    dtype: string
  - name: original_solution
    dtype: string
  splits:
  - name: full_original_236_10_30_2024
    num_bytes: 682124
    num_examples: 236
  - name: func_original_53_10_30_2024
    num_bytes: 80836
    num_examples: 53
  - name: func_variations_265_10_30_2024
    num_bytes: 417491
    num_examples: 265
  download_size: 541080
  dataset_size: 1180451
configs:
- config_name: default
  data_files:
  - split: full_original_236_10_30_2024
    path: data/full_original_236_10_30_2024-*
  - split: func_original_53_10_30_2024
    path: data/func_original_53_10_30_2024-*
  - split: func_variations_265_10_30_2024
    path: data/func_variations_265_10_30_2024-*
extra_gated_prompt: >-
  By requesting access to this dataset, you agree to cite the following works in
  any publications or projects that utilize this data:
  - Putnam-AXIOM dataset: @article{putnam_axiom2024, title={Putnam-AXIOM: A
  Functional and Static Benchmark for Measuring Higher Level Mathematical
  Reasoning}, author={Aryan Gulati and Brando Miranda and Eric Chen and Emily
  Xia and Kai Fronsdal and Bruno de Moraes Dumont and Sanmi Koyejo},
  journal={38th Conference on Neural Information Processing Systems (NeurIPS
  2024) Workshop on MATH-AI}, year={2024},
  url={https://openreview.net/pdf?id=YXnwlZe0yf}, note={Preprint available at:
  https://openreview.net/pdf?id=YXnwlZe0yf}}
---
# Putnam AXIOM Dataset

## Dataset Summary

The Putnam AXIOM dataset is designed for evaluating large language models (LLMs) on advanced mathematical reasoning. It is based on challenging problems from the William Lowell Putnam Mathematical Competition and contains three subsets:

- Full Original (236 problems): the 236 competition problems in their original form.
- Functional Original (53 problems): the subset of 53 original problems for which functional variations were created.
- Functional Variations (265 problems): modified versions of those problems, designed to prevent memorization and to test genuine mathematical understanding.

Each problem includes:
- Problem statement
- Solution
- Original problem and solution (where the entry is a functional variation)
- Answer type (e.g., numerical, proof)
- Source and type of problem (e.g., Algebra, Calculus, Geometry)
## Supported Tasks and Leaderboards

- Mathematical Reasoning: Evaluate mathematical reasoning and problem-solving skills.
- Language Model Benchmarking: Use this dataset to benchmark performance of language models on advanced mathematical questions.
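
As a rough, non-authoritative sketch of such a benchmarking loop, the snippet below iterates over the original split, queries a placeholder `query_model` function (swap in your own model call), and scores predictions with a naive boxed-answer comparison. Both `query_model` and `extract_final_answer` are hypothetical helpers introduced here for illustration; they are not part of the dataset or any official evaluation harness.

```python
from datasets import load_dataset


def query_model(prompt: str) -> str:
    # Placeholder: swap in a call to your LLM of choice (API or local model).
    return ""


def extract_final_answer(text: str) -> str:
    """Very rough placeholder: return the contents of the last \\boxed{...} if present."""
    marker = "\\boxed{"
    start = text.rfind(marker)
    if start == -1:
        return text.strip()
    return text[start + len(marker):].split("}")[0].strip()


dataset = load_dataset("brando/putnam-axiom-dataset")
problems = dataset["full_original_236_10_30_2024"]

correct = 0
for example in problems:
    prediction = query_model(example["problem"])
    # Naive exact string match; real evaluations need answer normalization and
    # assume the reference solution contains a final \boxed{...} answer.
    if extract_final_answer(prediction) == extract_final_answer(example["solution"]):
        correct += 1

print(f"Accuracy on the original problems: {correct / len(problems):.2%}")
```
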
## Languages

The dataset is presented in English.

## Dataset Structure

### Data Fields

- year: The year of the competition.
- id: Unique identifier for each problem.
- problem: The problem statement.
- solution: The solution or explanation for the problem.
- answer_type: The expected type of answer (e.g., numerical, proof).
- source: The origin of the problem (Putnam).
- type: A description of the problem’s mathematical topic (e.g., "Algebra Geometry").
- original_problem: Original form of the problem, where variations exist.
- original_solution: Original solution to the problem, if modified in this dataset.
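
A quick way to verify this schema is to load the dataset and inspect a split's features. The snippet below is a minimal sketch; it assumes you have accepted the gated-access terms on the Hub and authenticated, for example with `huggingface-cli login`.

```python
from datasets import load_dataset

# Gated dataset: accept the access terms on the Hub and authenticate first,
# e.g. via `huggingface-cli login`.
dataset = load_dataset("brando/putnam-axiom-dataset")

split = dataset["full_original_236_10_30_2024"]
print(split.features)  # the nine string fields listed above
print(split[0]["id"], split[0]["answer_type"], split[0]["type"])
```
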
### Subsets

| Subset | Description | Number of Problems |
|---|---|---|
| full_original_236_10_30_2024 | Complete set of 236 original problems | 236 |
| func_original_53_10_30_2024 | Subset of 53 original problems with functional variations | 53 |
| func_variations_265_10_30_2024 | Modified variations for evaluation | 265 |

## Dataset Usage

```python
from datasets import load_dataset

# Load the full dataset (all three splits)
dataset = load_dataset("brando/putnam-axiom-dataset")

# Access each split
full_original = dataset["full_original_236_10_30_2024"]
func_original = dataset["func_original_53_10_30_2024"]
func_variations = dataset["func_variations_265_10_30_2024"]

# Example usage: print the first problem from the full original split
print(full_original[0])
```
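
Because every entry in the variations split also carries the problem it was derived from, the two can be compared side by side. The snippet below is a small illustration using the `original_problem` and `original_solution` fields described above.

```python
from datasets import load_dataset

dataset = load_dataset("brando/putnam-axiom-dataset")
variations = dataset["func_variations_265_10_30_2024"]

example = variations[0]
print("Variation:\n", example["problem"])
print("\nOriginal problem:\n", example["original_problem"])
print("\nOriginal solution:\n", example["original_solution"])
```
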
## Citation

If you use this dataset, please cite it as follows:

```bibtex
@article{fronsdal2024putnamaxiom,
  title={Putnam-AXIOM: A Functional and Static Benchmark for Measuring Higher Level Mathematical Reasoning},
  author={Kai Fronsdal and Aryan Gulati and Brando Miranda and Eric Chen and Emily Xia and Bruno de Moraes Dumont and Sanmi Koyejo},
  journal={NeurIPS 2024 Workshop on MATH-AI},
  year={2024},
  month={October},
  url={https://openreview.net/pdf?id=YXnwlZe0yf},
  note={Published: 09 Oct 2024, Last Modified: 09 Oct 2024},
  keywords={Benchmarks, Large Language Models, Mathematical Reasoning, Mathematics, Reasoning, Machine Learning},
  abstract={As large language models (LLMs) continue to advance, many existing benchmarks designed to evaluate their reasoning capabilities are becoming less challenging. These benchmarks, though foundational, no longer offer the complexity necessary to evaluate the cutting edge of artificial reasoning. In this paper, we present the Putnam-AXIOM Original benchmark, a dataset of 236 challenging problems from the William Lowell Putnam Mathematical Competition, along with detailed step-by-step solutions. To address the potential data contamination of Putnam problems, we create functional variations for 53 problems in Putnam-AXIOM. We see that most models get a significantly lower accuracy on the variations than the original problems. Even so, our results reveal that Claude-3.5 Sonnet, the best-performing model, achieves 15.96% accuracy on the Putnam-AXIOM original but experiences more than a 50% reduction in accuracy on the variations dataset when compared to its performance on corresponding original problems.},
  license={Apache 2.0}
}
```
## License

This dataset is licensed under the Apache License 2.0.