# SuperBruphin-3x7B

This is an experimental ~18.5B-parameter Mixture of Experts (MoE) model created with mergekit (mixtral branch).

## Models Merged

The following models were included in the merge:

- nbeerbower/bruphin-epsilon (base)
- FelixChao/WestSeverus-7B-DPO-v2
- jondurbin/airoboros-m-7b-3.1.2

## Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: nbeerbower/bruphin-epsilon
gate_mode: hidden
dtype: bfloat16
experts:
  - source_model: nbeerbower/bruphin-epsilon
    positive_prompts:
      - "Tell a story."
  - source_model: FelixChao/WestSeverus-7B-DPO-v2
    positive_prompts:
      - "Solve this problem."
  - source_model: jondurbin/airoboros-m-7b-3.1.2
    positive_prompts:
      - "Write a letter."
```

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 73.75 |
| AI2 Reasoning Challenge (25-shot) | 71.16 |
| HellaSwag (10-shot)               | 87.74 |
| MMLU (5-shot)                     | 64.58 |
| TruthfulQA (0-shot)               | 66.85 |
| Winogrande (5-shot)               | 81.53 |
| GSM8k (5-shot)                    | 70.66 |
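These scores come from the Open LLM Leaderboard, which runs EleutherAI's lm-evaluation-harness. A rough reproduction sketch using the harness's Python API follows; the leaderboard pins a specific harness revision and per-task few-shot counts, so treat the task name and settings here as assumptions to adjust:

```python
import lm_eval

# Illustrative only: ARC is shown at 25-shot to match the table above;
# the other benchmarks use their own task names and few-shot counts.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=nbeerbower/SuperBruphin-3x7B,dtype=bfloat16",
    tasks=["arc_challenge"],
    num_fewshot=25,
)
print(results["results"]["arc_challenge"])
```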