You need to agree to share your contact information to access this dataset

This repository is publicly accessible, but you have to accept the conditions to access its files and content.

By using this data, you agree to comply with the original usage licenses of all sources contributing to MathPile_Commercial. MathPile_Commercial is governed by the CC BY-SA 4.0 license. Access to this dataset is granted automatically once you accept the license terms and complete all the required fields below.



🔥Update:

  • [2024/01/06] We released the commercial-use version of MathPile, namely MathPile_Commercial.

Dataset Card for MathPile_Commercial

MathPile_Commercial is a commercial-use version of MathPile, obtained by removing from the latest version of MathPile (v0.2) those documents that are prohibited from commercial use. Specifically, we detected non-commercially licensed documents in the source data, using the license information in the metadata for arXiv sources and keyword matching for the other sources. As a result, we excluded approximately 8,000 documents from the latest version of MathPile: 7,350 from arXiv, 518 from Creative Commons sources, 68 from textbooks, and 8 from Wikipedia. This version of the dataset contains around 9.2 billion tokens.

MathPile is a diverse and high-quality math-centric corpus of about 9.5 billion tokens, which differs significantly from previous work in the following respects:

  • Math-centric: MathPile uniquely caters to the math domain, unlike general-domain corpora such as Pile and RedPajama, or multilingual ones such as ROOTS and The Stack. While math-centric corpora exist, they are often either closed-source, like Google's Minerva and OpenAI's MathMix, or lack diversity, such as ProofPile and OpenWebMath.

  • Diversity: MathPile draws from a wide range of sources: Textbooks (including lecture notes), arXiv, Wikipedia, ProofWiki, StackExchange, and Web Pages. It encompasses mathematical content suitable for K-12, college, postgraduate levels, and math competitions. This diversity is a first, especially with our release of a significant collection of high-quality textbooks (~0.19B tokens).

  • High-Quality: We adhered to the principle of less is more, firmly believing in the supremacy of data quality over quantity, even in the pre-training phase. Our meticulous data collection and processing efforts included a complex suite of preprocessing, prefiltering, cleaning, filtering, and deduplication, ensuring the high quality of our corpus.

  • Data Documentation: To enhance transparency, we've extensively documented MathPile. This includes a dataset sheet (see Table 5 in our paper) and quality annotations for web-sourced documents, like language identification scores and symbol-to-word ratios. This gives users flexibility to tailor the data to their needs. We've also performed data contamination detection to eliminate duplicates from benchmark test sets like MATH and MMLU-STEM.
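The contamination detection mentioned above can be illustrated with a generic n-gram overlap test. The sketch below is illustrative only: the word-level tokenization and the 13-gram window are assumptions, not necessarily the exact settings used for MathPile; see the paper for the actual procedure.

```python
from typing import Set, Tuple

def ngrams(text: str, n: int = 13) -> Set[Tuple[str, ...]]:
    """All word-level n-grams of `text` (lowercased, whitespace-tokenized)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def is_contaminated(doc: str, benchmark_ngrams: Set[Tuple[str, ...]], n: int = 13) -> bool:
    """True if `doc` shares at least one n-gram with the benchmark test set."""
    return not ngrams(doc, n).isdisjoint(benchmark_ngrams)
```

In practice one would precompute `benchmark_ngrams` once over all test-set problems (e.g., from MATH and MMLU-STEM) and drop any training document that triggers a match.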

Dataset Details

Refer to Appendix A in our paper for the MathPile Dataset Sheet.

How to download MathPile?

Currently, we recommend downloading the dataset locally from the command line (e.g., with huggingface-cli) rather than with the Python function load_dataset("GAIR/MathPile_Commercial") (due to possible network issues), then unpacking the .gz files and loading the .jsonl files. The following commands may be helpful:

$ huggingface-cli download --resume-download --repo-type dataset GAIR/MathPile_Commercial --local-dir /your/path/ --local-dir-use-symlinks False

$ cd /your/path/
$ find . -type f -name "*.gz" -exec gzip -d {} \;

Later we will also support loading the dataset via load_dataset("GAIR/MathPile_Commercial"). Stay tuned.
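Once unpacked, each file is a JSON Lines file with one document per line. A minimal loading sketch (the directory layout under /your/path/ is an assumption; adjust the glob pattern to your local structure):

```python
import glob
import gzip
import json
import os

def load_jsonl(path):
    """Parse one JSON object per line; transparently handles .gz files."""
    opener = gzip.open if path.endswith(".gz") else open
    with opener(path, "rt", encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

def load_corpus(root):
    """Load every .jsonl / .jsonl.gz file found under `root`."""
    docs = []
    for path in sorted(glob.glob(os.path.join(root, "**", "*.jsonl*"), recursive=True)):
        docs.extend(load_jsonl(path))
    return docs
```

For corpora of this size you will likely want to stream documents one at a time rather than materialize the whole list in memory.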

Dataset Description

  • Curated by: GAIR Lab, SJTU
  • Funded by [optional]: GAIR Lab, SJTU
  • Language(s) (NLP): English
  • License: CC BY-SA 4.0

Dataset Sources

Uses

Direct Use

To develop mathematical language models.

Out-of-Scope Use

This dataset may not be suitable for scenarios unrelated to mathematics or reasoning.

Dataset Structure

{
    "text": ...,
    "SubSet": "CommomCrawl" | "StackExchange" | "Textbooks" | "Wikipedia" | "ProofWiki" | "arXiv",
    "meta": {"language_detection_score": ..., "idx": ..., "contain_at_least_two_stop_words": ..., ...}
}

Dataset Creation

Curation Rationale

To create a diverse and high-quality math-centric corpus, thereby enhancing the mathematical reasoning abilities of language models.

Source Data

Data Collection and Processing

We sourced data from Textbooks, lecture notes, arXiv, Wikipedia, ProofWiki, StackExchange, and Common Crawl. Throughout the MathPile development, we meticulously source and gather data, applying a rigorous and math-specific pipeline. This pipeline encompasses various stages such as preprocessing, prefiltering, language identification, cleaning and filtering, and deduplication, all aimed at maintaining the high quality of the corpus. Please see our paper for more details.

Annotations

We provided quality annotations (such as language identification scores and the ratio of symbols to words) for documents from Web pages (i.e., Common Crawl and Wikipedia). These annotations offer future researchers and developers the flexibility to filter the data according to their criteria, tailoring it to their specific needs.
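These annotations make per-document filtering straightforward. A minimal sketch, assuming the meta field names shown in the Dataset Structure section; the 0.8 threshold is purely illustrative, not a recommended setting:

```python
def keep_document(doc, min_lang_score=0.8):
    """Example filter over the web-sourced quality annotations.

    Field names follow the schema in the Dataset Structure section;
    documents without a given annotation are kept by default.
    """
    meta = doc.get("meta", {})
    # Drop documents whose language-identification score is too low.
    if meta.get("language_detection_score", 1.0) < min_lang_score:
        return False
    # Drop documents explicitly flagged as lacking two stop words.
    if meta.get("contain_at_least_two_stop_words") is False:
        return False
    return True
```

A caller would apply this as `docs = [d for d in docs if keep_document(d)]`, tuning the threshold to trade corpus size against quality.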

Personal and Sensitive Information

The corpus may contain academic email addresses and author names, as seen in papers from sources such as arXiv. However, we view this as justifiable and within acceptable bounds.

Bias, Risks, and Limitations

  • The decisions made during the data collection and processing phases might not always be optimal.
  • Some documents in MathPile may not always be of the highest quality. We are committed to continually refining and optimizing this corpus.

Recommendations

Users should be made aware of the risks, biases, and limitations of the dataset.

Citation

If you find our work useful or use MathPile, please cite our paper:

@article{wang2023mathpile,
  title={Generative AI for Math: Part I -- MathPile: A Billion-Token-Scale Pretraining Corpus for Math},
  author={Wang, Zengzhi and Xia, Rui and Liu, Pengfei},
  journal={arXiv preprint arXiv:2312.17120},
  year={2023}
}

Dataset Card Authors

Zengzhi Wang

Dataset Card Contact

stefanpengfei@gmail.com, zzwang.nlp@gmail.com
