Through PPO with a reward model trained on only 5% of the LeanStatement_RL dataset, our model achieves improved performance, reaching a pass@1 compilation success rate of 95.3% (465/488) on MiniF2F and 76.0% (284/374) on ProofNet. Combined with our SFT results, these findings establish a new state of the art in mathematics formalization as of December 2024.
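For reference, below is a minimal sketch of loading the model with Hugging Face transformers and generating a Lean statement. The prompt format and generation settings are illustrative assumptions, not the exact configuration used for the benchmark results above.

```python
# Minimal usage sketch (assumed prompt format; not necessarily the exact
# setup behind the reported MiniF2F/ProofNet numbers).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SJTULean/LeanFormalizer_PPO"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # weights are stored in BF16
    device_map="auto",
)

# Natural-language statement to autoformalize into a Lean 4 theorem statement.
problem = "Prove that the sum of two even integers is even."
prompt = f"Translate the following problem into a Lean 4 statement:\n{problem}\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```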

Safetensors: 7.62B params · BF16
Model tree for SJTULean/LeanFormalizer_PPO

Base model: Qwen/Qwen2.5-7B (finetuned)

Datasets used to train SJTULean/LeanFormalizer_PPO