Evaluation

#1
by tugstugi - opened

I can't reproduce the results. What were the generation parameters used? temperature, top_p etc.

Hello! This is our code for inference with vLLM on the AIME dataset:

import torch
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

model_path = "<your model path>"

def generate_sample_batch(question_list):
    llm = LLM(
        model=model_path,                                 # the model path
        trust_remote_code=True,
        tensor_parallel_size=torch.cuda.device_count(),   # use all visible GPUs
        gpu_memory_utilization=0.80,
    )
    sampling_params = SamplingParams(
        max_tokens=4096,
        temperature=0,                                    # greedy decoding
        stop=["\n###\nProblem: ", "<|eot_id|>"],
    )
    outputs = llm.generate(question_list, sampling_params, use_tqdm=True)
    completions = [output.outputs[0].text for output in outputs]
    return completions

def make_conv_hf(question, tokenizer):
    system_prompt = "\nWhen tackling complex reasoning tasks, you have access to the following actions. Use them as needed to progress through your thought process.\n\n[ASSESS]\n\n[ADVANCE]\n\n[VERIFY]\n\n[SIMPLIFY]\n\n[SYNTHESIZE]\n\n[PIVOT]\n\n[OUTPUT]\n\nYou should strictly follow the format below:\n\n[ACTION NAME]\n\n# Your action step 1\n\n# Your action step 2\n\n# Your action step 3\n\n...\n\nNext action: [NEXT ACTION NAME]\n"
    content = question + "\n\nPresent the answer in LaTex format: \\boxed{Your answer}"
    msg = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": content}
    ]
    chat = tokenizer.apply_chat_template(msg, tokenize=False, add_generation_prompt=True)
    return chat

tokenizer = AutoTokenizer.from_pretrained(model_path)
all_problems = []    # all_problems should be a list of question strings, e.g. [question1, question2, ...]
completions = generate_sample_batch(
    [make_conv_hf(problem_data, tokenizer) for problem_data in all_problems])

We use 2×80GB A800 GPUs.
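The script above only produces completions; scoring is done separately. As a minimal sketch (not the exact grading code), assuming the ground-truth answers are available as plain strings, the boxed answers could be extracted and compared like this (extract_boxed_answer and compute_accuracy are hypothetical helpers):

import re

def extract_boxed_answer(text):
    # Hypothetical helper: take the content of the last \boxed{...} in the completion.
    # The simple regex below does not handle nested braces.
    matches = re.findall(r"\\boxed\{([^{}]*)\}", text)
    return matches[-1].strip() if matches else None

def compute_accuracy(completions, answers):
    # Exact string match between the extracted answer and the ground truth.
    correct = 0
    for completion, answer in zip(completions, answers):
        predicted = extract_boxed_answer(completion)
        if predicted is not None and predicted == str(answer).strip():
            correct += 1
    return correct / len(answers)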

PRIME org

Hello, a small supplement: we use the test scripts from Eurus (https://github.com/OpenBMB/Eurus).

Hope it helps.

Still can't reproduce with the above script. I am evaluating on https://huggingface.co/datasets/AI-MO/aimo-validation-aime, which also contains AIME24. It solves only 11 of 90 problems, i.e. about 12%. Did you evaluate with the model weights uploaded here? Maybe something went wrong during the model upload?
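For reference, an evaluation loop over that dataset could look roughly like the sketch below; the "train" split and the "problem"/"answer" column names are assumptions about the dataset card, and it reuses generate_sample_batch, make_conv_hf, tokenizer, and the extract_boxed_answer helper sketched above.

from datasets import load_dataset

# Sketch only: split and column names are assumptions about AI-MO/aimo-validation-aime.
ds = load_dataset("AI-MO/aimo-validation-aime", split="train")

prompts = [make_conv_hf(row["problem"], tokenizer) for row in ds]
completions = generate_sample_batch(prompts)

solved = sum(
    extract_boxed_answer(c) == str(row["answer"]).strip()
    for c, row in zip(completions, ds)
)
print(f"solved {solved} / {len(ds)}")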

PRIME org

Hi, we just uploaded the eval script to the GitHub repository (https://github.com/PRIME-RL/PRIME) and will merge it soon. We also tested the model, and the results are as follows:
{
    "2024_AIME_I_Problems": {
        "total": 15,
        "success": 5
    },
    "2023_AIME_I_Problems": {
        "total": 15,
        "success": 1
    },
    "2023_AIME_II_Problems": {
        "total": 15,
        "success": 2
    },
    "2022_AIME_I_Problems": {
        "total": 15,
        "success": 2
    },
    "2024_AIME_II_Problems": {
        "total": 15,
        "success": 3
    },
    "2022_AIME_II_Problems": {
        "total": 15,
        "success": 1
    }
}

AIME ALL-total: 90, success: 14, rate: 0.15555555555555556

AIME2024-total: 30, success: 8, rate: 0.26666666666666666
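The aggregate numbers follow directly from the per-year breakdown; a quick arithmetic check:

# Sanity check of the totals reported above.
results = {
    "2024_AIME_I_Problems":  {"total": 15, "success": 5},
    "2024_AIME_II_Problems": {"total": 15, "success": 3},
    "2023_AIME_I_Problems":  {"total": 15, "success": 1},
    "2023_AIME_II_Problems": {"total": 15, "success": 2},
    "2022_AIME_I_Problems":  {"total": 15, "success": 2},
    "2022_AIME_II_Problems": {"total": 15, "success": 1},
}

total = sum(v["total"] for v in results.values())       # 90
success = sum(v["success"] for v in results.values())   # 14
print(success / total)                                   # 0.1555...

total_24 = sum(v["total"] for k, v in results.items() if k.startswith("2024"))      # 30
success_24 = sum(v["success"] for k, v in results.items() if k.startswith("2024"))  # 8
print(success_24 / total_24)                             # 0.2666...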

hanbin changed discussion status to closed
hanbin changed discussion status to open

So what does this mean? Is AIME 24 easier than AIME 22/23, or is the model overfitting on AIME 24?