# Bepis
A new 9B model from jeiku. This one is smart, proficient at markdown, knows when to stop talking, and is quite soulful. The merge is an equal three-way split among https://huggingface.co/ChaoticNeutrals/Prodigy_7B, https://huggingface.co/Test157t/Prima-LelantaclesV6-7b, and https://huggingface.co/cgato/Thespis-CurtainCall-7b-v0.2.1.
If there's any 7B to 11B merge or finetune you'd like to see, feel free to leave a message.
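To try the model, here is a minimal loading sketch using Hugging Face transformers; the prompt and generation settings are illustrative, not a recommended configuration:

```python
# Minimal sketch: load Bepis_9B with transformers and generate text.
# Assumes `pip install transformers accelerate` and enough VRAM for fp16.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ChaoticNeutrals/Bepis_9B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the merge dtype below
    device_map="auto",          # requires accelerate
)

prompt = "Write a short scene in markdown with a heading and a list."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```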
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: primathespis
        layer_range: [0, 20]
  - sources:
      - model: prodigalthespis
        layer_range: [12, 32]
merge_method: passthrough
dtype: float16
```
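To reproduce the merge, a hedged sketch using mergekit's `mergekit-yaml` CLI is below. Note that `primathespis` and `prodigalthespis` are the author's intermediate merges, so the model references here are placeholders:

```python
# Hedged sketch: write the config above to disk and run it with mergekit
# (`pip install mergekit`). The names `primathespis` and `prodigalthespis`
# refer to the author's local intermediate merges; substitute your own
# paths or Hugging Face repo IDs.
import subprocess

config = """\
slices:
  - sources:
      - model: primathespis
        layer_range: [0, 20]
  - sources:
      - model: prodigalthespis
        layer_range: [12, 32]
merge_method: passthrough
dtype: float16
"""

with open("bepis.yml", "w") as f:
    f.write(config)

# mergekit-yaml <config> <output-dir>; add --cuda to merge on GPU.
subprocess.run(["mergekit-yaml", "bepis.yml", "./Bepis_9B"], check=True)
```

The two slices overlap on layers 12 through 19, so the passthrough stack ends up with 40 layers instead of a 7B model's 32, which is where the extra ~2B parameters come from.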
## Open LLM Leaderboard Evaluation Results
Detailed results can be found on the Open LLM Leaderboard.
| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 62.40 |
| AI2 Reasoning Challenge (25-Shot) | 62.54 |
| HellaSwag (10-Shot)               | 80.12 |
| MMLU (5-Shot)                     | 62.84 |
| TruthfulQA (0-shot)               | 53.30 |
| Winogrande (5-shot)               | 76.48 |
| GSM8k (5-shot)                    | 39.12 |