---
title: README
emoji: 🐠
colorFrom: gray
colorTo: indigo
sdk: static
pinned: false
---
Data and models accompanying the paper [When To Solve, When To Verify: Compute-Optimal Problem Solving and Generative Verification for LLM Reasoning](https://arxiv.org/abs/2504.01005), containing:
- Finetuned generative verifiers (i.e., GenRM-FT) for math reasoning.
- Synthetic verification data generated by GPT-4o for math reasoning to train your own generative verifiers.
- Solutions and verifications generated by various models for math and science reasoning.
# MATH Dataset
We use Llama-3.1-8B-Instruct and Qwen-2.5-7B-Instruct to generate solutions for problems in the training split of the [MATH dataset](https://huggingface.co/datasets/hendrycks/competition_math).
Then, we use GPT-4o to verify these solutions. We filter out verifications whose verdict does not match the ground-truth correctness of the solution, and balance the dataset to contain equal numbers of 'yes' and 'no' verifications.
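The filtering-and-balancing step can be sketched as follows. This is an illustrative sketch, not the actual pipeline code; the field names (`verdict`, `solution_is_correct`) are assumptions, not the dataset's real schema.

```python
import random

def filter_and_balance(records, seed=0):
    """Keep verifications whose verdict agrees with ground truth,
    then downsample so 'yes' and 'no' verdicts are balanced.

    NOTE: field names are illustrative; check the dataset card for
    the actual schema.
    """
    # Drop verifications whose verdict contradicts the known correctness.
    kept = [r for r in records
            if (r["verdict"] == "yes") == r["solution_is_correct"]]
    yes = [r for r in kept if r["verdict"] == "yes"]
    no = [r for r in kept if r["verdict"] == "no"]
    # Downsample the majority class to the size of the minority class.
    n = min(len(yes), len(no))
    rng = random.Random(seed)
    return rng.sample(yes, n) + rng.sample(no, n)
```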
This results in these datasets:
## Training data for GenRM-FT
- Llama-3.1-8B-Instruct: https://huggingface.co/datasets/sc-genrm-scaling/genrm_gpt4o_verifs_llama_3p1_8b_solns_math_train
- Qwen-2.5-7B-Instruct: https://huggingface.co/datasets/sc-genrm-scaling/genrm_gpt4o_verifs_qwen_2p5_7b_solns_math_train
We fine-tune the two models on their respective datasets using LoRA, resulting in these fine-tuned GenRMs:
## Finetuned Verifiers
- Llama-3.1-8B-Instruct: https://huggingface.co/sc-genrm-scaling/llama_3.1_8b_genrm_ft
- Qwen-2.5-7B-Instruct: https://huggingface.co/sc-genrm-scaling/qwen_2.5_7b_genrm_ft
You can follow [this example](https://github.com/nishadsinghi/sc-genrm-scaling/blob/master/llmonk/verify/demo.ipynb) to run inference with these models.
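A GenRM produces a chain-of-thought verification that ends in a final Yes/No verdict; the sketch below shows one way to map that text to a numeric score. The exact prompt and output format are defined in the linked demo notebook and may differ; the regex-based parsing here is an illustrative assumption, not the repository's actual implementation.

```python
import re

def verdict_score(verification_text):
    """Map a generative verifier's final Yes/No verdict to 1.0/0.0.

    NOTE: assumes the verification ends with a standalone "Yes" or "No";
    see the demo notebook for the actual output format.
    """
    matches = re.findall(r"\b(yes|no)\b", verification_text.lower())
    if not matches:
        return None  # no verdict found in the text
    # Use the last occurrence, since the verdict concludes the verification.
    return 1.0 if matches[-1] == "yes" else 0.0
```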
We run these generative verifiers (for Llama-3.3-70B-Instruct, the base model without fine-tuning) on solutions from the MATH test set to obtain the following data, which we analyse in the paper:
## Solutions and Verifications for the Test Set
- Llama-3.1-8B-Instruct:
- Solutions: https://huggingface.co/datasets/sc-genrm-scaling/MATH128_Solutions_Llama-3.1-8B-Instruct
- Verifications (Finetuned Verifier): https://huggingface.co/datasets/sc-genrm-scaling/MATH128_verifications_GenRM-FT_Llama-3.1-8B-Instruct
- Llama-3.3-70B-Instruct:
- Solutions: https://huggingface.co/datasets/sc-genrm-scaling/MATH128_Solutions_Llama-3.3-70B-Instruct
- Verifications (*Without* Finetuning): https://huggingface.co/datasets/sc-genrm-scaling/MATH128_verifications_Llama-3.3-70B-Instruct_GenRM-Base
- Qwen-2.5-7B-Instruct:
- Solutions: https://huggingface.co/datasets/sc-genrm-scaling/MATH128_Solutions_Qwen-2.5-7B-Instruct
- Verifications (Finetuned Verifier): https://huggingface.co/datasets/sc-genrm-scaling/MATH128_verifications_GenRM-FT_Qwen-2.5-7B-Instruct
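With paired solution and verification datasets like the ones above, a common way to use verifier scores is weighted voting over the candidates' final answers. The sketch below is a minimal illustration of that idea, not the paper's evaluation code; the input format (a list of final answers and a parallel list of verifier scores) is an assumption for the example.

```python
from collections import defaultdict

def select_answer(final_answers, verifier_scores):
    """Pick the final answer with the highest total verifier score
    (verifier-weighted voting over N candidate solutions).

    NOTE: input format is illustrative; the released datasets store
    solutions and verifications in their own schemas.
    """
    totals = defaultdict(float)
    for answer, score in zip(final_answers, verifier_scores):
        totals[answer] += score
    return max(totals, key=totals.get)
```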
# AIME25
## Solutions and Verifications
- QwQ-32B:
- Solutions: https://huggingface.co/datasets/sc-genrm-scaling/AIME25_Solutions_QwQ-32B
- Verifications (*Without* Finetuning): https://huggingface.co/datasets/sc-genrm-scaling/AIME25_verifications_QwQ32B
# GPQA
## Solutions and Verifications
- Llama-3.3-70B-Instruct:
- Solutions: https://huggingface.co/datasets/sc-genrm-scaling/GPQA_diamond_Solutions_Llama-3.3-70B-Instruct
  - Verifications (*Without* Finetuning): https://huggingface.co/datasets/sc-genrm-scaling/GPQA_verifications_GenRM-Base_Llama-3.3-70B-Instruct