A finetune of the pre-DPO Bagel model (https://huggingface.co/jondurbin/bagel-34b-v0.2) on the MetaMathFewshot dataset (https://huggingface.co/datasets/abacusai/MetaMathFewshot).
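A minimal sketch of querying the model with the `transformers` library. The Alpaca-style prompt template is an assumption based on the MetaMath convention, not something this card specifies; check the tokenizer's chat template or the Bagel card for the exact format.

```python
def build_prompt(question: str) -> str:
    """Wrap a math question in a MetaMath-style instruction prompt (assumed format)."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{question}\n\n"
        "### Response: Let's think step by step."
    )

prompt = build_prompt(
    "Natalia sold clips to 48 of her friends in April, and then she sold "
    "half as many clips in May. How many clips did she sell altogether?"
)
print(prompt)

# Loading the 34B model itself requires roughly 70 GB of memory in FP16:
# from transformers import AutoModelForCausalLM, AutoTokenizer
# tok = AutoTokenizer.from_pretrained("abacusai/MetaMath-bagel-34b-v0.2-c1500")
# model = AutoModelForCausalLM.from_pretrained(
#     "abacusai/MetaMath-bagel-34b-v0.2-c1500", torch_dtype="auto", device_map="auto")
# ids = tok(prompt, return_tensors="pt").to(model.device)
# out = model.generate(**ids, max_new_tokens=512)
# print(tok.decode(out[0], skip_special_tokens=True))
```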

Evaluation Results

Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K

For comparison, the GSM8K score for the original metamath/MetaMath-Mistral-7B was 46.17, and its average score was 69.7.

Model size: 34.4B params (Safetensors, FP16)

Model tree for abacusai/MetaMath-bagel-34b-v0.2-c1500: 1 quantized model

Dataset used to train abacusai/MetaMath-bagel-34b-v0.2-c1500: MetaMathFewshot