# Uploaded model
- Developed by: theprint
- License: apache-2.0
- Finetuned from model: unsloth/mistral-7b-v0.3-bnb-4bit

This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.
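A minimal inference sketch using the Transformers library, assuming `transformers`, `bitsandbytes`, and `torch` are installed; the 4-bit quantization setting mirrors the bnb-4bit base the finetune started from. The function name `load_model` is illustrative, not part of the model card.

```python
MODEL_ID = "theprint/Conversely-Mistral-7B"

def load_model(model_id: str = MODEL_ID):
    """Load tokenizer and a 4-bit quantized model for inference."""
    # Imports are kept local so defining this function does not
    # require transformers/torch to be installed at import time.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    quant = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.float16,
    )
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        quantization_config=quant,
        device_map="auto",
    )
    return tokenizer, model

if __name__ == "__main__":
    tokenizer, model = load_model()
    inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=50)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Loading in 4-bit keeps the 7B model within the memory budget of a single consumer GPU; drop `quantization_config` to load full-precision weights instead.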
## Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 14.72 |
| IFEval (0-Shot) | 26.08 |
| BBH (3-Shot) | 25.71 |
| MATH Lvl 5 (4-Shot) | 0.91 |
| GPQA (0-Shot) | 4.70 |
| MuSR (0-Shot) | 10.63 |
| MMLU-PRO (5-Shot) | 20.29 |
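The reported average is the arithmetic mean of the six benchmark scores, which can be checked directly:

```python
# Per-benchmark scores from the leaderboard table above.
scores = {
    "IFEval (0-Shot)": 26.08,
    "BBH (3-Shot)": 25.71,
    "MATH Lvl 5 (4-Shot)": 0.91,
    "GPQA (0-Shot)": 4.70,
    "MuSR (0-Shot)": 10.63,
    "MMLU-PRO (5-Shot)": 20.29,
}

# Unweighted mean, rounded to two decimals as reported.
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 14.72
```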
## Model tree for theprint/Conversely-Mistral-7B

- Base model: mistralai/Mistral-7B-v0.3
- Quantized: unsloth/mistral-7b-v0.3-bnb-4bit
## Evaluation results

- IFEval (0-Shot), strict accuracy (Open LLM Leaderboard): 26.08
- BBH (3-Shot), normalized accuracy (Open LLM Leaderboard): 25.71
- MATH Lvl 5 (4-Shot), exact match (Open LLM Leaderboard): 0.91
- GPQA (0-Shot), acc_norm (Open LLM Leaderboard): 4.70
- MuSR (0-Shot), acc_norm (Open LLM Leaderboard): 10.63
- MMLU-PRO (5-Shot), accuracy on test set (Open LLM Leaderboard): 20.29