This is the final Self-play Theorem Prover (STP) model described in the paper https://arxiv.org/abs/2502.00212. The training and evaluation code is available here.
```bibtex
@article{dong2025beyond,
  title={Beyond Limited Data: Self-play LLM Theorem Provers with Iterative Conjecturing and Proving},
  author={Dong, Kefan and Ma, Tengyu},
  journal={arXiv preprint arXiv:2502.00212},
  year={2025}
}
```
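The model can be loaded with standard Hugging Face tooling. Below is a minimal sketch for sampling a whole proof with `transformers`, assuming the checkpoint works with `AutoModelForCausalLM`; the prompt format, theorem statement, and sampling parameters are illustrative and may differ from the exact evaluation pipeline in the linked code.

```python
# Minimal sketch: whole-proof generation with Hugging Face transformers.
# The prompt below is an illustrative Lean 4 theorem statement; the exact
# prompt format used in the paper's evaluation pipeline may differ.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kfdong/STP_model_Lean"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Ask the model to complete the proof after `:= by`.
prompt = "theorem add_comm_example (a b : ℕ) : a + b = b + a := by\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs, max_new_tokens=256, do_sample=True, temperature=1.0, top_p=0.95
)
# Print only the newly generated proof text.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

In practice, generated proofs should be checked with the Lean verifier; the sampling above only produces candidate proofs.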
## 1. Evaluation Results
The table below compares the pass@3200 performance of STP (our model) and DeepSeek-Prover-V1.5 on miniF2F-test and ProofNet-test.
| | miniF2F-test | ProofNet-test |
|---|---|---|
| DeepSeek-Prover-V1.5-SFT | 53.3% ± 0.5% | 21.0% ± 0.9% |
| DeepSeek-Prover-V1.5-RL | 54.9% ± 0.7% | 22.0% ± 0.5% |
| STP | 61.7% ± 0.6% | 23.1% ± 0.5% |
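Pass@3200 measures whether any of up to 3200 sampled proofs per statement is accepted by the Lean verifier. As a sketch (not necessarily the paper's exact evaluation script), the standard unbiased pass@k estimator of Chen et al. (2021) can be computed as follows; with n = k = 3200 it reduces to the fraction of statements with at least one verified proof, and error bars come from repeated runs.

```python
# Sketch of the standard unbiased pass@k estimator (Chen et al., 2021).
# n = number of proof attempts sampled per statement, c = number of attempts
# accepted by the Lean verifier. Illustrative only; the paper's exact
# evaluation protocol may differ.
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Probability that at least one of k samples (drawn without replacement
    from n attempts, c of which are correct) is a verified proof."""
    if n - c < k:
        return 1.0
    return float(1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Toy data: number of verified proofs among 3200 samples for each statement.
results = [0, 5, 3200, 12]
print(np.mean([pass_at_k(3200, c, 3200) for c in results]))  # benchmark-level pass@3200
```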
## 2. Dataset
We also release the dataset here, which contains:
- Extracted examples from mathlib4,
- Generated correct proofs of statements in LeanWorkbook,
- Generated correct proofs of conjectures proposed by our model during self-play training.
Our final model is fine-tuned from DeepSeek-Prover-V1.5-SFT on this dataset for 1 epoch; a sketch for loading the data is shown below.
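A hedged sketch for loading the released data with the `datasets` library; the repository ID below is a placeholder (use the dataset link above), and the field names of each example may differ.

```python
# Minimal sketch for loading the released training data with Hugging Face Datasets.
# "kfdong/STP-Lean-dataset" is a hypothetical placeholder ID; substitute the
# actual dataset repository linked above.
from datasets import load_dataset

ds = load_dataset("kfdong/STP-Lean-dataset", split="train")  # placeholder repo ID
print(len(ds))
print(ds[0])  # inspect one example, e.g. a Lean statement paired with a verified proof
```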