This model is a DPO finetune of our MetaMath SFT model on the Truthy DPO dataset.
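
For context, below is a minimal sketch of how a DPO finetune of this kind can be set up with `trl`'s `DPOTrainer`. The base-model id is a placeholder (the actual SFT checkpoint name is not stated here), the dataset id `jondurbin/truthy-dpo-v0.1` is assumed to be the Truthy DPO dataset referenced above, and the hyperparameters are illustrative rather than the ones used to train this model.

```python
# Minimal sketch, assuming trl's DPOTrainer; ids and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_model_id = "our-org/metamath-sft"  # hypothetical SFT checkpoint id

model = AutoModelForCausalLM.from_pretrained(base_model_id)
tokenizer = AutoTokenizer.from_pretrained(base_model_id)

# Preference dataset assumed to provide "prompt", "chosen", and "rejected" columns.
dataset = load_dataset("jondurbin/truthy-dpo-v0.1", split="train")

config = DPOConfig(
    output_dir="metamath-truthy-dpo",
    beta=0.1,                       # strength of the KL penalty against the reference model
    per_device_train_batch_size=2,
    learning_rate=5e-7,
    num_train_epochs=1,
)

trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,     # older trl versions take tokenizer= instead
)
trainer.train()
```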
## Evaluation Results
| Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
|---------|-----|-----------|------|------------|------------|-------|
| 75.54   | 69.20 | 84.34   | 76.46 | 67.58    | 82.87      | 72.78 |
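
These tasks match the standard Open LLM Leaderboard suite. The following is a hedged sketch of how comparable scores could be reproduced with lm-evaluation-harness; the model id is a placeholder and the task names and settings are assumptions, not a record of how the numbers above were produced.

```python
# Sketch using lm-evaluation-harness (lm_eval >= 0.4); model id is a placeholder.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=our-org/metamath-truthy-dpo,dtype=bfloat16",
    tasks=["arc_challenge", "hellaswag", "mmlu", "truthfulqa_mc2", "winogrande", "gsm8k"],
    batch_size=8,
)
for task, metrics in results["results"].items():
    print(task, metrics)
```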