---

Note: this repo is a WIP and does not yet implement all features described below. It is certainly not ready to be used to train a model.

# Dataset Card for Proof-pile

# Dataset Description
The `proof-pile` is a 45GB pre-training dataset of mathematical text. It is composed of diverse sources of both informal and formal mathematics, namely:
- ArXiv.math (40GB)
- Open-source math textbooks (50MB)
- ProofWiki
- Wikipedia math articles
- MATH dataset (6MB)

# Supported Tasks
This dataset is intended for pre-training language models. We envision that models pre-trained on the `proof-pile` will have many downstream applications, including informal quantitative reasoning, formal theorem proving, semantic search for formal mathematics, and autoformalization.

# Languages
All informal mathematics in the `proof-pile` is written in English and LaTeX (arXiv articles in other languages are filtered out using [languagedetect](https://github.com/shuyo/language-detection/blob/wiki/ProjectHome.md)). The formal theorem proving languages represented in this dataset are Lean 3, Isabelle, Coq, HOL Light, Metamath, and Mizar.
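
As an illustration of the kind of filter described above, here is a minimal sketch using `langdetect`, the Python port of the linked language-detection library. This is an assumption about the approach, not the dataset's actual filtering code.

```python
from langdetect import detect, LangDetectException

def is_english(text: str) -> bool:
    """Detect whether an article's text is English."""
    try:
        return detect(text) == "en"
    except LangDetectException:
        # Empty or undetectable text cannot be classified; drop it.
        return False

# Hypothetical usage: keep only English article bodies.
articles = [
    "Let $G$ be a finite group acting on a set $X$.",
    "Soit $f$ une fonction continue sur $[0, 1]$.",
]
english_articles = [a for a in articles if is_english(a)]
```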

# Splits
The data is sorted into `arxiv`, `books`, `formal`, `stack-exchange`, `wiki`, and `math-dataset` configurations, so that particular configurations are easy to upsample during pre-training with the `datasets.interleave_datasets()` function.
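
For example, the following sketch upsamples the `formal` configuration relative to `arxiv`. The repo id `proof-pile` is a placeholder assumption; substitute this dataset's actual Hub path.

```python
from datasets import load_dataset, interleave_datasets

# Load two configurations of the dataset ("proof-pile" is a placeholder id).
arxiv = load_dataset("proof-pile", "arxiv", split="train")
formal = load_dataset("proof-pile", "formal", split="train")

# Draw 20% of pre-training examples from the much smaller formal split,
# sampling with a fixed seed for reproducibility.
train = interleave_datasets([arxiv, formal], probabilities=[0.8, 0.2], seed=42)
```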