
# RandomMergeNoNormWEIGHTED-7B-DARETIES

RandomMergeNoNormWEIGHTED-7B-DARETIES is a merge of the following models using [mergekit](https://github.com/arcee-ai/mergekit):

* [FelixChao/WestSeverus-7B-DPO-v2](https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2)
* [CultriX/Wernicke-7B-v9](https://huggingface.co/CultriX/Wernicke-7B-v9)
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)

## 🧩 Configuration

```yaml
models:
  - model: FelixChao/WestSeverus-7B-DPO-v2
    # No parameters necessary for base model
  - model: FelixChao/WestSeverus-7B-DPO-v2
    parameters:
      density: [1, 0.7, 0.1]
      weight: [0, 0.3, 0.7, 1]
  - model: CultriX/Wernicke-7B-v9
    parameters:
      density: [1, 0.7, 0.3]
      weight: [0, 0.25, 0.5, 1]
  - model: mlabonne/NeuralBeagle14-7B
    parameters:
      density: 0.25
      weight:
        - filter: mlp
          value: 0.5
        - value: 0
merge_method: ties
base_model: FelixChao/WestSeverus-7B-DPO-v2
parameters:
  int8_mask: true
  normalize: true
  sparsify:
    - filter: mlp
      value: 0.5
    - filter: self_attn
      value: 0.5
dtype: float16
```
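
## 💻 Usage

The config above can be re-run with mergekit's `mergekit-yaml` entry point. For inference, the merged model loads like any other 7B causal LM; below is a minimal sketch using the 🤗 Transformers API (the prompt and generation settings are illustrative assumptions, not tuned recommendations).

```python
# Minimal inference sketch; prompt and sampling settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jsfs11/RandomMergeNoNormWEIGHTED-7B-DARETIES"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the merge's `dtype: float16`
    device_map="auto",
)

prompt = "What is a large language model?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```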

## Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.

| Metric                               | Value |
|--------------------------------------|------:|
| Avg.                                 | 75.36 |
| AI2 Reasoning Challenge (25-Shot)    | 73.38 |
| HellaSwag (10-Shot)                  | 88.50 |
| MMLU (5-Shot)                        | 64.94 |
| TruthfulQA (0-shot)                  | 71.50 |
| Winogrande (5-shot)                  | 83.58 |
| GSM8k (5-shot)                       | 70.28 |
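
These scores can be approximately reproduced with EleutherAI's lm-evaluation-harness. Here is a sketch for one benchmark, assuming lm-eval v0.4+; the leaderboard's exact task versions and settings may differ, so local numbers will not match exactly.

```python
# Sketch: score the model on ARC-Challenge (25-shot), one of the
# leaderboard benchmarks. Assumes `pip install lm-eval` (v0.4+).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=jsfs11/RandomMergeNoNormWEIGHTED-7B-DARETIES,dtype=float16",
    tasks=["arc_challenge"],
    num_fewshot=25,
    batch_size=8,
)
print(results["results"]["arc_challenge"])
```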