4bpw exl2 quant of [flammenai/Flammades-Mistral-Nemo-12B](https://huggingface.co/flammenai/Flammades-Mistral-Nemo-12B). A loading sketch appears at the end of this card.
# Flammades-Mistral-Nemo-12B
nbeerbower/Mistral-Nemo-Gutenberg-Doppel-12B-v2 finetuned on flammenai/Date-DPO-NoAsterisks and jondurbin/truthy-dpo-v0.1.
## Method
ORPO-tuned for 3 epochs on 2x RTX 3090 GPUs.
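For reference, here is a minimal sketch of such an ORPO run using TRL's `ORPOTrainer`. The hyperparameters, column selection, and launch setup are illustrative assumptions, not the actual training script:

```python
# Sketch of ORPO preference tuning with TRL; dataset and base-model names
# come from this card, everything else is an illustrative assumption.
import torch
from datasets import concatenate_datasets, load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "nbeerbower/Mistral-Nemo-Gutenberg-Doppel-12B-v2"
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base)

# ORPO expects prompt/chosen/rejected columns; assumes both datasets have them.
cols = ["prompt", "chosen", "rejected"]
train = concatenate_datasets([
    load_dataset("flammenai/Date-DPO-NoAsterisks", split="train").select_columns(cols),
    load_dataset("jondurbin/truthy-dpo-v0.1", split="train").select_columns(cols),
])

args = ORPOConfig(
    output_dir="flammades-orpo",
    num_train_epochs=3,             # matches the card
    per_device_train_batch_size=1,  # assumption for 24 GB cards
    gradient_accumulation_steps=8,  # assumption
    bf16=True,
)

# For the 2-GPU setup, launch with e.g. `accelerate launch` (assumption).
ORPOTrainer(model=model, args=args, train_dataset=train, tokenizer=tokenizer).train()
```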
## Open LLM Leaderboard Evaluation Results
Detailed results are available on the Open LLM Leaderboard.
| Metric | Value |
|---|---|
| Avg. | 22.34 |
| IFEval (0-shot) | 38.42 |
| BBH (3-shot) | 32.39 |
| MATH Lvl 5 (4-shot) | 6.19 |
| GPQA (0-shot) | 7.16 |
| MuSR (0-shot) | 20.31 |
| MMLU-PRO (5-shot) | 29.57 |
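These are Open LLM Leaderboard v2 tasks. A hedged sketch of re-running one metric locally with lm-evaluation-harness's Python API follows; the task name assumes the harness's `leaderboard_*` task group, and exact scores will vary with harness version and hardware:

```python
# Sketch: re-running a single leaderboard task with lm-evaluation-harness.
# Uses the full-precision upstream model, since the harness's "hf" backend
# does not load exl2 quants.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=flammenai/Flammades-Mistral-Nemo-12B,dtype=bfloat16",
    tasks=["leaderboard_ifeval"],  # 0-shot IFEval, per the table above
    batch_size="auto",
)
print(results["results"]["leaderboard_ifeval"])
```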
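To run this 4bpw exl2 quant directly, here is a minimal loading sketch with the exllamav2 library; the local model path, prompt, and generation settings are placeholders:

```python
# Sketch: loading the 4bpw exl2 quant with exllamav2.
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

config = ExLlamaV2Config("models/Flammades-Mistral-Nemo-12B-exl2-4bpw")  # local download path
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # allocate cache as layers load
model.load_autosplit(cache)               # split weights across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Write a short story about", max_new_tokens=200))
```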