Uploaded model
- Developed by: theprint
- License: apache-2.0
- Finetuned from model: unsloth/mistral-7b-v0.3-bnb-4bit
This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.
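The card does not include the training script. The sketch below only illustrates the general Unsloth + TRL fine-tuning pattern for the 4-bit base model listed above; the toy dataset, LoRA settings, and training arguments are placeholders (not the values used for this model), and exact argument names vary between TRL versions.

```python
# Illustrative sketch of an Unsloth + TRL fine-tune of the 4-bit base model.
# Dataset and hyperparameters are placeholders, not this model's actual recipe.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import Dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-v0.3-bnb-4bit",  # quantized base from the model tree
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Tiny in-memory dataset so the sketch is self-contained.
train_data = Dataset.from_dict(
    {"text": ["### Instruction:\nSay hello.\n\n### Response:\nHello!"]}
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_data,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```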
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 14.72 |
| IFEval (0-Shot) | 26.08 |
| BBH (3-Shot) | 25.71 |
| MATH Lvl 5 (4-Shot) | 0.91 |
| GPQA (0-shot) | 4.70 |
| MuSR (0-shot) | 10.63 |
| MMLU-PRO (5-shot) | 20.29 |
Model tree for theprint/Conversely-Mistral-7B
- Base model: mistralai/Mistral-7B-v0.3
- Quantized: unsloth/mistral-7b-v0.3-bnb-4bit
Evaluation results
- IFEval (0-Shot), strict accuracy: 26.08 (Open LLM Leaderboard)
- BBH (3-Shot), normalized accuracy: 25.71 (Open LLM Leaderboard)
- MATH Lvl 5 (4-Shot), exact match: 0.91 (Open LLM Leaderboard)
- GPQA (0-shot), acc_norm: 4.70 (Open LLM Leaderboard)
- MuSR (0-shot), acc_norm: 10.63 (Open LLM Leaderboard)
- MMLU-PRO (5-shot), accuracy on the test set: 20.29 (Open LLM Leaderboard)
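For reference, a minimal sketch of loading the uploaded model for inference with the transformers library; the prompt and generation settings are illustrative.

```python
# Minimal inference sketch with transformers (requires accelerate for device_map="auto").
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "theprint/Conversely-Mistral-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain what a 4-bit quantized model is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```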