
Dataset Card for Premise Selection in Isabelle

Dataset Summary

The Isabelle Premise Selection Dataset is a collection of premise data mined from the Archive of Formal Proofs and the Standard Library of the interactive theorem prover Isabelle.

The dataset is designed for the task of premise selection, which involves selecting the most relevant premises (also called lemmas) for a given proof state. It includes over 4 million aligned pairs of proof context and relevant premises.

The dataset includes premises from the original proofs contained in the Archive of Formal Proofs and the Standard Library, as well as additional premises from proofs automatically generated using Isabelle's Sledgehammer tool. This tool employs symbolic methods for automatic premise selection. In effect, the dataset combines multiple sources, which increases its diversity.

The Isabelle Premise Selection Dataset can be a valuable resource for researchers and practitioners in the fields of automated theorem proving, automated reasoning and information retrieval.

Supported Tasks and Leaderboards

The Isabelle Premise Selection Dataset is designed for the task of premise selection, which involves selecting the most relevant premises for a given proof state. The typical setting for this task is a large developed library of formally encoded mathematical knowledge, over which mathematicians attempt to prove new lemmas and theorems. Read this for more background on Automated Theorem Proving, and this paper for Premise Selection.

The dataset has been used to train a model for premise selection using batch-contrastive learning in Magnushammer: A Transformer-based Approach to Premise Selection. The model achieved state-of-the-art (SOTA) performance on the PISA benchmark, with a proof rate of 71%, and a very competitive 37.3% on the miniF2F benchmark.

Languages

All information contained in this dataset is written in English and Isabelle syntax, which represents mathematical expressions using a notation similar to LaTeX.
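Isabelle source encodes mathematical symbols as escaped tokens such as `\<noteq>` and `\<times>` (visible in the data instance below). As a hypothetical sketch, a small helper could map a handful of common tokens to their Unicode forms for display; the token table here is an illustrative subset, not Isabelle's full symbol list:

```python
# Hypothetical helper: render a few common Isabelle symbol tokens as Unicode.
# This table is a small illustrative subset of Isabelle's symbol vocabulary.
SYMBOLS = {
    r"\<noteq>": "\u2260",   # not-equal sign
    r"\<times>": "\u00d7",   # multiplication sign
    r"\<and>": "\u2227",     # logical conjunction
    r"\<forall>": "\u2200",  # universal quantifier
}

def render(text: str) -> str:
    """Replace known Isabelle symbol tokens with Unicode characters."""
    for token, char in SYMBOLS.items():
        text = text.replace(token, char)
    return text

print(render(r'assumes "mat_det M \<noteq> 0"'))
# prints: assumes "mat_det M ≠ 0"
```

Whether to render symbols or keep the raw escape sequences is a modeling choice; the dataset itself stores the raw tokens.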

Dataset Structure

Data Instances

{
  'statement': 'lemma congruence_congruence_inv [simp]:\n  assumes "mat_det M \\<noteq> 0"\n  shows "congruence M (congruence (mat_inv M) H) = H"',
  'state': 'proof (prove)\nusing this:\n  mat_det M \\<noteq> 0\n  congruence M (congruence (mat_inv M) H) =\n  congruence (mat_inv M *\\<^sub>m\\<^sub>m M) H\n\ngoal (1 subgoal):\n 1. congruence M (congruence (mat_inv M) H) = H',
  'step': 'by (metis mat_inv_1 mat_inv_def)',
  'premise_name': 'mat_inv_l',
  'premise_statement': 'mat_inv_l: fixes A :: "complex \\<times> complex \\<times> complex \\<times> complex" assumes "mat_det A \\<noteq> 0" shows "mat_inv A *\\<^sub>m\\<^sub>m A = eye"'
}

Data Fields

  • statement: Description of the original statement of the problem, together with assumptions (if any)
  • state: The proof state describing the current goals of the proof; it may also contain type information for the objects referenced in the statement field
  • step: The proof step, which is a command that advances the proof. It may introduce definitions or conjectures, or solve the current conjecture. Note that including this field as part of the proof state representation results in a data leak, since premise names are always included in the step. This field should not be used for training premise selection models; read more here.
  • premise_name: The name of the ground-truth premise as it would appear in the proof library
  • premise_statement: The statement of the ground-truth premise
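To illustrate the leak mentioned above for the step field: the tactic call names the ground-truth premise explicitly, so a model shown the step can read the answer off directly. A minimal sketch of the check, using a hypothetical record whose step cites its premise:

```python
# Hypothetical record illustrating the leak: the tactic in `step`
# names the ground-truth premise verbatim (values are illustrative).
record = {
    "state": "proof (prove)\ngoal (1 subgoal):\n 1. ...",
    "step": "by (metis mat_inv_l mat_inv_def)",  # cites the premise by name
    "premise_name": "mat_inv_l",
}

def leaks_premise(record: dict) -> bool:
    """True if the ground-truth premise name appears verbatim in the step."""
    return record["premise_name"] in record["step"]

print(leaks_premise(record))  # prints: True
```

This is why the step field must be excluded from the proof-state representation when training premise selection models.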

Note that this is not the only possible format (fields, number of premises per statement) for this dataset; the scripts used to generate this and other datasets (e.g. for proof step generation) are available on GitHub.

Data Splits

We make the data available in a single file, but any train/val/test splitting is possible.
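Since the data ships as a single file, splitting is left to the user. A minimal sketch, assuming the records have already been loaded into a Python list, is a deterministic shuffled split:

```python
import random

def split(records, val_frac=0.05, test_frac=0.05, seed=0):
    """Deterministically shuffle and partition records into train/val/test."""
    records = list(records)
    random.Random(seed).shuffle(records)
    n = len(records)
    n_val = int(n * val_frac)
    n_test = int(n * test_frac)
    return (records[n_val + n_test:],        # train
            records[:n_val],                 # validation
            records[n_val:n_val + n_test])   # test

train, val, test = split(range(100))
print(len(train), len(val), len(test))  # prints: 90 5 5
```

Note that a random record-level split lets statements from the same theory file land in different splits; for stricter evaluation, splitting by source theory is worth considering.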

Curation Rationale

This dataset was created to facilitate the training of models for premise selection in Isabelle and potentially other Interactive Theorem Provers (Lean, Coq, etc.). No existing Isabelle datasets used a raw text format; prior datasets relied on a translation scheme into selected logics, which makes the engineering at inference time much more involved.

Source Data

The dataset was created using the proofs included in the Archive of Formal Proofs and the Standard library included in the Isabelle 2021-1 distribution.

Known Limitations

The data included in this dataset is mostly untyped, meaning that there is little information about the objects referenced in the statement or premise statements. Adding type information would be a valuable contribution.

Licensing Information

This dataset is made available under the Apache License, Version 2.0. This license allows you to use, copy, and modify the dataset, as long as you comply with the terms and conditions of the license. You may also distribute the dataset, either in its original form or as a modified work, provided that you include the license terms with any distribution. There is no warranty for this dataset, and it is provided "as is". If you have any questions or concerns about the licensing or use of the dataset, please open an issue.

Citation

If you use this dataset in your research, please cite the associated arXiv paper: Magnushammer: A Transformer-based Approach to Premise Selection

Acknowledgements

We would like to express our gratitude to the following individuals and organizations for their contributions to this project:

  • We would like to acknowledge @jinpz for their contributions to the data mining aspect of this project. Their expertise and hard work greatly assisted us in achieving our project goals.

  • PISA API: We also want to thank the developers of the PISA API for creating a powerful tool that allowed us to interact with Isabelle through Python.

  • Google TRC Compute: Finally, we want to acknowledge Google's TPU Research Cloud for providing the compute necessary to develop the code infrastructure needed for the mining procedure.

We are grateful for the support and contributions of each of these individuals and organizations, and we would not have been able to accomplish this project without them.
