Through PPO with a reward model trained on only 5% of the LeanStatement_RL dataset, our model achieved improved performance, reaching pass@1 compilation success rates of 95.3% (465/488) on the MiniF2F benchmark and 76.0% (284/374) on ProofNet. Combined with our SFT results, these findings establish a new state of the art in mathematics formalization as of December 2024.