---
license: cc-by-4.0
task_categories:
- text-generation
language:
- en
size_categories:
- 10M<n<100M
---

# The ProofLang Corpus

The ProofLang Corpus contains 3.7M English-language proofs extracted from the LaTeX source of papers submitted to [arXiv.org](https://arXiv.org) between 1992 and April 2022. The corpus focuses on the language of proofs rather than their mathematical content: formulas, references, and citations are replaced by placeholder tokens such as `MATH`, `REF`, and `CITE` (see Dataset Creation below). The paper with arXiv identifier `<id>` in the dataset can be accessed online at the url `https://arxiv.org/abs/<id>`.

The dataset is distributed in four configurations:

* `proofs`: one row per proof, with fields `paper` (the arXiv identifier) and `proof`
* `sentences`: the proofs split into individual sentences, with fields `paper` and `sentence`
* `tags`: the arXiv subject tags for each paper, with fields `paper` and `tags`
* `raw`: the proofs as extracted from LaTeX, before the final cleanup pass, with fields `paper` and `proof`

## Dataset Size

* `proofs` is 3,094,779,182 bytes (unzipped) and has 3,681,893 examples.
* `sentences` is 3,545,309,822 bytes (unzipped) and has 38,899,132 examples.
* `tags` is 7,967,839 bytes (unzipped) and has 328,642 examples.
* `raw` is 3,178,997,379 bytes (unzipped) and has 3,681,903 examples.

## Dataset Statistics

* The average length of `sentences` is 14.1 words.
* The average length of `proofs` is 10.5 sentences.

(A sketch that recomputes these averages by streaming the data appears at the end of this card.)

## Dataset Usage

Data can be downloaded as (zipped) TSV files; a sketch of reading these files directly with `pandas` appears at the end of this card. The data can also be accessed programmatically from Python using the `datasets` library. For example, to print the first 10 proofs:

```python
from datasets import load_dataset

dataset = load_dataset('proofcheck/prooflang', 'proofs', split='train', streaming=True)
for d in dataset.take(10):
    print(d['paper'], d['proof'])
```

To look at individual sentences from the proofs:

```python
from datasets import load_dataset

dataset = load_dataset('proofcheck/prooflang', 'sentences', split='train', streaming=True)
for d in dataset.take(10):
    print(d['paper'], d['sentence'])
```

To get a comma-separated list of arXiv subject tags for each paper:

```python
from datasets import load_dataset

dataset = load_dataset('proofcheck/prooflang', 'tags', split='train', streaming=True)
for d in dataset.take(10):
    print(d['paper'], d['tags'])
```

Finally, to look at a version of the proofs with less aggressive cleanup (straight from the LaTeX extraction):

```python
from datasets import load_dataset

dataset = load_dataset('proofcheck/prooflang', 'raw', split='train', streaming=True)
for d in dataset.take(10):
    print(d['paper'], d['proof'])
```

### Data Splits

There is currently no train/test split; all the data is in `train`.

## Dataset Creation

We started with the LaTeX source of 1.6M papers submitted to [arXiv.org](https://arXiv.org) between 1992 and April 2022. The proofs were extracted by a Python script that simulates parts of LaTeX (including defining and expanding macros). The script does no actual typesetting, discards all output that is not between `\begin{proof}` and `\end{proof}`, and skips math content. During extraction:

* Math-mode formulas (signalled by `$`, `\begin{equation}`, etc.) become `MATH`
* `\ref{...}` and variants (`\autoref`, `\subref`, etc.) become `REF`
* `\cite{...}` and variants (`\Citet`, `\shortciteNP`, etc.) become `CITE`
* Words that appear to be proper names become `NAME`
* `\item` becomes `CASE:`

We then ran a cleanup pass on the extracted proofs that included:

* Cleaning up common extraction errors (e.g., due to uninterpreted macros)
* Replacing more references with `REF`, e.g., `Theorem 2(a)` or `Postulate (*)`
* Replacing more citations with `CITE`, e.g., `Page 47 of CITE`
* Replacing more proof-case markers with `CASE:`, e.g., `Case (a).`
* Fixing a few common misspellings

(A simplified sketch of these substitutions appears at the end of this card.)

## Additional Information

This dataset is released under the Creative Commons Attribution 4.0 license. Copyright for the underlying proofs remains with the authors of the papers on [arXiv.org](https://arXiv.org), but these simplified snippets are fair use under US copyright law.
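The averages reported under Dataset Statistics can be recomputed approximately by streaming the data. This is a minimal sketch, not the script used to produce the reported figures: it counts words by whitespace splitting over a fixed-size sample, which may not match the tokenization or full-corpus counts behind the 14.1-word average.

```python
from datasets import load_dataset

dataset = load_dataset('proofcheck/prooflang', 'sentences', split='train', streaming=True)

total_words = 0
total_sentences = 0
for d in dataset.take(100_000):  # a sample; drop .take() to scan the full corpus
    total_words += len(d['sentence'].split())
    total_sentences += 1

print(total_words / total_sentences)
```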
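The (zipped) TSV files mentioned under Dataset Usage can also be fetched and read directly, without the `datasets` library. In this sketch the filename `proofs.tsv.gz` is an assumption; check the dataset repository's file listing for the actual names of the TSV files.

```python
import pandas as pd
from huggingface_hub import hf_hub_download

# Download one TSV file from the dataset repository.
# NOTE: 'proofs.tsv.gz' is an assumed filename; see the repo for real names.
path = hf_hub_download(repo_id='proofcheck/prooflang',
                       filename='proofs.tsv.gz',
                       repo_type='dataset')

# pandas decompresses .gz files transparently based on the extension.
df = pd.read_csv(path, sep='\t')
print(df.head())
```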
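Finally, to illustrate the kinds of substitutions described under Dataset Creation, here is a hypothetical regex-based sketch. The actual extraction is performed by a Python script that simulates LaTeX macro expansion, not by these regexes, and the patterns below cover only a few simple cases.

```python
import re

# Hypothetical patterns approximating the placeholder substitutions;
# the real extractor handles macros, environments, and many more variants.
PATTERNS = [
    (re.compile(r'\$[^$]*\$'), 'MATH'),                      # inline math formulas
    (re.compile(r'\\(?:auto|sub)?ref\s*\{[^}]*\}'), 'REF'),  # \ref and some variants
    (re.compile(r'\\[cC]ite\w*\s*\{[^}]*\}'), 'CITE'),       # \cite and some variants
    (re.compile(r'\\item\b'), 'CASE:'),                      # list items as proof cases
]

def simplify(latex: str) -> str:
    for pattern, placeholder in PATTERNS:
        latex = pattern.sub(placeholder, latex)
    return latex

print(simplify(r'By \ref{thm:main} and \cite{knuth84}, $x^2 \ge 0$.'))
# -> By REF and CITE, MATH.
```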