---
base_model:
- deepseek-ai/DeepSeek-Prover-V1.5-SFT
datasets:
- kfdong/STP_Lean
- internlm/Lean-Workbook
license: mit
pipeline_tag: text-generation
library_name: transformers
---

This is the final Self-play Theorem Prover (STP) model as described in the paper [Beyond Limited Data: Self-play LLM Theorem Provers with Iterative Conjecturing and Proving](https://arxiv.org/abs/2502.00212). The training and evaluation code is available [here](https://github.com/kfdong/STP/tree/main).
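
The model can be loaded with the standard `transformers` API. The snippet below is a minimal sketch: the repository id and the prompt format are illustrative assumptions, not prescribed by the paper; see the [training code](https://github.com/kfdong/STP/tree/main) for the exact prompting and proof-search setup.

```python
# Minimal sketch: load the model and sample a Lean 4 proof completion.
# The repo id "kfdong/STP_model_Lean" and the prompt below are placeholders/assumptions;
# replace them with this repository's id and the prompt format used in the STP codebase.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kfdong/STP_model_Lean"  # placeholder repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# A Lean 4 theorem statement to complete (whole-proof generation).
prompt = "theorem add_comm_example (a b : ℕ) : a + b = b + a := by\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=1.0)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```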


```tex
@article{dong2025beyond,
  title={Beyond Limited Data: Self-play LLM Theorem Provers with Iterative Conjecturing and Proving},
  author={Dong, Kefan and Ma, Tengyu},
  journal={arXiv preprint arXiv:2502.00212},
  year={2025}
}
```

## 1. Evaluation Results

The table below compares the pass@3200 performance (i.e., whether at least one of 3200 sampled proofs verifies) of STP (our model) against the DeepSeek-Prover-V1.5 baselines on miniF2F-test and ProofNet-test.

<div align="center">

| Model | miniF2F-test | ProofNet-test |
|--------|------------------|------------------|
| **DeepSeek-Prover-V1.5-SFT** | 53.3% ± 0.5% | 21.0% ± 0.9% |
| **DeepSeek-Prover-V1.5-RL** | 54.9% ± 0.7% | 22.0% ± 0.5% |
| **STP** | **61.7% ± 0.6%** | **23.1% ± 0.5%** |

</div>
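
For reference, pass@k metrics of this kind are commonly computed with the unbiased estimator of Chen et al. (2021) from per-problem success counts. The sketch below shows that standard estimator; whether it matches the exact evaluation protocol and error-bar computation used in the paper is an assumption.

```python
# Minimal sketch of the standard unbiased pass@k estimator (Chen et al., 2021).
# The toy numbers below are illustrative only, not real evaluation data.
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Probability that at least one of k samples is correct,
    given c correct proofs out of n generated samples (n >= k)."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Per-problem (n, c) pairs aggregated into a benchmark-level pass@k.
results = [(3200, 5), (3200, 0), (3200, 120)]  # toy numbers
k = 3200  # with n == k this reduces to "solved at least once"
print(np.mean([pass_at_k(n, c, k) for n, c in results]))
```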

## 2. Dataset

We also release the dataset [here](https://huggingface.co/datasets/kfdong/STP_Lean), which contains:
- Extracted examples from mathlib4,
- Generated correct proofs of statements in LeanWorkbook, 
- Generated correct proofs of conjectures proposed by our model during self-play training. 

Our final model is fine-tuned from DeepSeek-Prover-V1.5-SFT on this dataset for 1 epoch.
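
A minimal sketch for inspecting the released data with the `datasets` library is shown below; the split and field names are assumptions, so check the dataset card for the actual schema.

```python
# Minimal sketch: load and inspect the released STP training data.
# The split name ("train") is an assumption; consult
# https://huggingface.co/datasets/kfdong/STP_Lean for the actual schema.
from datasets import load_dataset

ds = load_dataset("kfdong/STP_Lean", split="train")
print(ds)      # dataset size and column names
print(ds[0])   # one training example
```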