---
dataset_info:
  features:
    - name: Rank
      dtype: int64
    - name: Puzzles
      dtype: string
    - name: AMT (s)
      dtype: float64
    - name: Solved rate
      dtype: string
    - name: 1-sigma Mean (s)
      dtype: float64
    - name: 1-sigma STD (s)
      dtype: float64
  splits:
    - name: train
      num_examples: 1362
task_categories:
  - question-answering
  - text-generation
language:
  - en
tags:
  - mathematical-reasoning
  - tree-of-thoughts
  - test-time-compute
  - game-of-24
size_categories:
  - 1K<n<10K
license: mit
---

# Game of 24 Dataset

## Dataset Description

The Game of 24 is a mathematical reasoning puzzle where players must use four numbers and basic arithmetic operations (+, -, *, /) to obtain the result 24. Each number must be used exactly once.

This dataset contains 1,362 unique Game of 24 puzzles, ranked by difficulty using human performance data collected on Amazon Mechanical Turk.

### Example

Input: `4 5 6 10`

Output: `(5 * (10 - 4)) - 6 = 24`

Step-by-step solution:

```
10 - 4 = 6 (left: 5 6 6)
5 * 6 = 30 (left: 6 30)
30 - 6 = 24 (left: 24)
Answer: (5 * (10 - 4)) - 6 = 24
```
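
For illustration, here is a minimal brute-force solver sketch (our own, not the search procedure from the Tree of Thoughts paper). It enumerates orderings, operators, and the five possible parenthesizations of four operands:

```python
from itertools import permutations, product

def solve_24(numbers):
    """Return one expression over `numbers` that evaluates to 24, or None."""
    # The five ways to parenthesize four operands.
    patterns = [
        "(({a}{p}{b}){q}{c}){r}{d}",
        "({a}{p}({b}{q}{c})){r}{d}",
        "({a}{p}{b}){q}({c}{r}{d})",
        "{a}{p}(({b}{q}{c}){r}{d})",
        "{a}{p}({b}{q}({c}{r}{d}))",
    ]
    for a, b, c, d in permutations(numbers):
        for p, q, r in product("+-*/", repeat=3):
            for pattern in patterns:
                expr = pattern.format(a=a, b=b, c=c, d=d, p=p, q=q, r=r)
                try:
                    # eval is safe here: we constructed expr ourselves.
                    if abs(eval(expr) - 24) < 1e-9:
                        return expr
                except ZeroDivisionError:
                    continue
    return None

print(solve_24([4, 5, 6, 10]))  # prints one valid expression
```

With four numbers there are only 24 orderings x 64 operator triples x 5 shapes = 7,680 candidate expressions, so exhaustive search is instant.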

## Dataset Structure

### Data Fields

- `Rank`: difficulty ranking (1 = easiest, 1362 = hardest)
- `Puzzles`: the four numbers, separated by spaces (e.g., "4 5 6 10")
- `AMT (s)`: average time for humans to solve, in seconds (AMT = Amazon Mechanical Turk)
- `Solved rate`: percentage of humans who successfully solved the puzzle
- `1-sigma Mean (s)`: mean solving time within one standard deviation (seconds)
- `1-sigma STD (s)`: standard deviation of solving time (seconds)

### Data Splits

The dataset ships as a single `train` split. The original Tree of Thoughts paper uses indices 900-1000 (100 puzzles) for evaluation.

Approximate difficulty bands (one way to select a band is sketched below):

- Easy puzzles (rank 1-300): 95-99% human solve rate, ~4-6 seconds
- Medium puzzles (rank 300-900): 85-95% human solve rate, ~6-10 seconds
- Hard puzzles (rank 900-1100): 80-90% human solve rate, ~10-15 seconds
- Very hard puzzles (rank 1100-1362): 20-80% human solve rate, 15-200+ seconds
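
As an illustration, a hedged sketch of pulling out a single band with the `datasets` `filter` API, assuming the `Rank` column described above:

```python
from datasets import load_dataset

ds = load_dataset("test-time-compute/game-of-24")["train"]

# Keep only the "hard" band: ranks 901-1100.
hard = ds.filter(lambda row: 900 < row["Rank"] <= 1100)
print(len(hard))  # should be about 200 puzzles
```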

## Source Data

This dataset comes from the official Tree of Thoughts (ToT) repository: https://github.com/princeton-nlp/tree-of-thought-llm

### Human Performance Collection

Puzzles were ranked using human performance data collected via Amazon Mechanical Turk, measuring:

- Success rate (percentage of correct solutions)
- Solving time (average time to a solution)

## Usage

### Loading the Dataset

```python
from datasets import load_dataset

dataset = load_dataset("test-time-compute/game-of-24")
```
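
Each row then exposes the fields described above, for example:

```python
row = dataset["train"][0]  # Rank 1: the easiest puzzle
print(row["Rank"], row["Puzzles"], row["Solved rate"])
```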

### Paper Evaluation Subset

To replicate the Tree of Thoughts paper evaluation:

```python
# Rows 900-999 (zero-based) hold the 100 relatively hard puzzles
# (ranks 901-1000) evaluated in the paper.
eval_subset = dataset['train'].select(range(900, 1000))
```

### Task Format

Each puzzle requires:

1. Input: four numbers (e.g., "4 5 6 10")
2. Output: a valid arithmetic expression that uses each number exactly once and equals 24
3. Verification: check that all four numbers are used and the expression evaluates to 24 (a sketch follows this list)
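
As an illustration of step 3, a minimal verification sketch (the helper name and the exact-arithmetic choice via `Fraction` are ours, not from the paper). It accepts answers in the dataset's `expression = 24` format:

```python
import ast
import re
from fractions import Fraction

def check_24(puzzle: str, output: str) -> bool:
    """Verify that `output` (e.g. "(5 * (10 - 4)) - 6 = 24") uses each
    puzzle number exactly once and evaluates to exactly 24."""
    expr = output.split("=")[0].strip()  # drop a trailing "= 24"
    # The multiset of numbers in the expression must match the puzzle.
    if sorted(re.findall(r"\d+", expr)) != sorted(puzzle.split()):
        return False
    ops = {ast.Add: lambda a, b: a + b, ast.Sub: lambda a, b: a - b,
           ast.Mult: lambda a, b: a * b, ast.Div: lambda a, b: a / b}

    def ev(node):  # tiny arithmetic-only evaluator (safer than eval)
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in ops:
            return ops[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, int):
            return Fraction(node.value)  # exact arithmetic, no float error
        raise ValueError("only + - * / and integers are allowed")

    try:
        return ev(ast.parse(expr, mode="eval")) == 24
    except (ValueError, ZeroDivisionError, SyntaxError):
        return False

print(check_24("4 5 6 10", "(5 * (10 - 4)) - 6 = 24"))  # True
```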

## Benchmark Results

From the Tree of Thoughts paper (indices 900-1000):

| Method            | Success Rate | Model |
|-------------------|--------------|-------|
| IO (100 samples)  | 7.3%         | GPT-4 |
| CoT (100 samples) | 4.0%         | GPT-4 |
| ToT (b=5)         | 74.0%        | GPT-4 |

Where:

- IO: input-output prompting (100 samples)
- CoT: chain-of-thought prompting (100 samples)
- ToT: Tree of Thoughts with beam width b = 5

## Citation

If you use this dataset, please cite the original Tree of Thoughts paper:

```bibtex
@article{yao2023tree,
  title={Tree of Thoughts: Deliberate Problem Solving with Large Language Models},
  author={Yao, Shunyu and Yu, Dian and Zhao, Jeffrey and Shafran, Izhak and Griffiths, Thomas L and Cao, Yuan and Narasimhan, Karthik},
  journal={arXiv preprint arXiv:2305.10601},
  year={2023}
}
```

## License

MIT License (same as the original Tree of Thoughts repository).