---
annotations_creators:
  - no-annotation
language:
  - en
language_creators:
  - found
license: []
multilinguality:
  - monolingual
pretty_name: proof-pile
size_categories: []
source_datasets: []
tags:
  - math
  - mathematics
  - formal-mathematics
task_categories:
  - text-generation
task_ids:
  - language-modeling
---

Note: this repo is a WIP and does not yet implement all features described below. It is certainly not ready to be used to train a model.

# Dataset Card for Proof-pile

## Dataset Description

The proof-pile is a 45GB pre-training dataset of mathematical text. The dataset is composed of diverse sources of both informal and formal mathematics, namely:

- ArXiv.math (40GB)
- Open-source math textbooks (50MB)
- Formal mathematics libraries (500MB)
  - Lean mathlib and other Lean repositories
  - Isabelle AFP
  - Coq mathematical components and other Coq repositories
  - HOL Light
  - set.mm
  - Mizar Mathematical Library
- Math Overflow and Math Stack Exchange (500MB)
- Wiki-style sources (50MB)
  - ProofWiki
  - Wikipedia math articles
- MATH dataset (6MB)

## Supported Tasks

This dataset is intended to be used for pre-training language models. We envision that models pre-trained on the proof-pile will have many downstream applications, including informal quantitative reasoning, formal theorem proving, semantic search for formal mathematics, and autoformalization.
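
As a quick illustration, the dataset can be loaded with the Hugging Face `datasets` library. The sketch below is illustrative only: the repository identifier (`hoskinson-center/proof-pile`) and the `"text"` field name are assumptions about the hosting location and schema, and streaming is used to avoid downloading the full 45GB up front.

```python
# A minimal sketch of loading the proof-pile for pre-training with the
# Hugging Face `datasets` library. The repository id and the "text" field
# name are assumptions; adjust them to the actual dataset layout.
from datasets import load_dataset

# Streaming avoids materializing the full 45GB on disk.
arxiv = load_dataset("hoskinson-center/proof-pile", "arxiv", streaming=True)

for example in arxiv["train"]:
    print(example["text"][:200])  # peek at the start of the first document
    break
```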

## Languages

All informal mathematics in the proof-pile is written in English and LaTeX (arXiv articles in other languages are filtered out using `languagedetect`). Formal theorem proving languages represented in this dataset are Lean 3, Isabelle, Coq, HOL Light, Metamath, and Mizar.
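
For illustration, language filtering of this kind might look like the sketch below. It uses the `langdetect` package on the assumption that `languagedetect` refers to it or a similar library; the actual filtering code used to build the proof-pile may differ.

```python
# Illustrative sketch of English-language filtering, assuming a
# langdetect-style library; not the actual proof-pile pipeline.
from langdetect import detect
from langdetect.lang_detect_exception import LangDetectException

def is_english(text: str) -> bool:
    """Return True if the detected language of `text` is English."""
    try:
        return detect(text) == "en"
    except LangDetectException:
        # Detection can fail on very short or symbol-heavy inputs.
        return False
```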

## Splits

The data is sorted into "arxiv", "books", "formal", "stack-exchange", "wiki", and "math-dataset" configurations, so that particular configurations can easily be upsampled during pre-training with the `datasets.interleave_datasets()` function.
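
For example, upsampling the much smaller formal corpus relative to arXiv might look like the sketch below. The repository id and the sampling probabilities are illustrative assumptions, not recommended settings.

```python
# A hedged sketch of upsampling one configuration during pre-training.
# Repository id, split names, and probabilities are illustrative.
from datasets import load_dataset, interleave_datasets

arxiv = load_dataset("hoskinson-center/proof-pile", "arxiv", split="train", streaming=True)
formal = load_dataset("hoskinson-center/proof-pile", "formal", split="train", streaming=True)

# Draw 30% of pre-training examples from the formal corpus.
mixed = interleave_datasets([arxiv, formal], probabilities=[0.7, 0.3], seed=42)
```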

Note that in the "stack-exchange" and "wiki" configurations, multiple documents are included in the same instance, separated by the string "<|endoftext|>".
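
Downstream code may therefore need to recover the individual documents from a packed instance; a minimal sketch, assuming the documents live in a `"text"` field:

```python
# Split a packed instance back into individual documents.
# The "text" field name is an assumption about the dataset schema.
SEPARATOR = "<|endoftext|>"

def split_documents(instance: dict) -> list[str]:
    return [doc for doc in instance["text"].split(SEPARATOR) if doc.strip()]
```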

## Contributions

Authors: Zhangir Azerbayev, Edward Ayers, Bartosz Piotrowski.

We would like to thank Jeremy Avigad for his invaluable perspective and guidance, and the Hoskinson Center for Formal Mathematics for its support.