
WestOrcaNeural-V2-DARETIES-7B

WestOrcaNeural-V2-DARETIES-7B is a merge of the following models using mergekit:

- decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP
- senseable/WestLake-7B-v2
- mlabonne/NeuralBeagle14-7B

🧩 Configuration

models:
  - model: mistralai/Mistral-7B-v0.1
    # no parameters necessary for base model
  - model: decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP
    parameters:
      density: 0.6
      weight: 0.35
  - model: senseable/WestLake-7B-v2
    parameters:
      density: 0.65
      weight: 0.4
  - model: mlabonne/NeuralBeagle14-7B
    parameters:
      density: 0.55
      weight: 0.25
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  int8_mask: true
dtype: float16
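
As a reference, here is a minimal sketch of reproducing this merge with mergekit's Python API (MergeConfiguration and run_merge). The config path, output directory, and MergeOptions flags are placeholders/assumptions and should be adapted to your setup.

```python
# Minimal sketch: run the DARE-TIES merge defined above with mergekit.
# Assumes `pip install mergekit` and that the YAML config above is saved as
# config.yaml; "./westorcaneural-v2-dareties-7b" is a placeholder output path.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./westorcaneural-v2-dareties-7b",
    options=MergeOptions(
        cuda=False,            # set True to merge on GPU
        copy_tokenizer=True,   # copy the base model's tokenizer into the output
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```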

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                                | Value |
|---------------------------------------|-------|
| Avg.                                  | 74.53 |
| AI2 Reasoning Challenge (25-Shot)     | 72.10 |
| HellaSwag (10-Shot)                   | 88.21 |
| MMLU (5-Shot)                         | 64.64 |
| TruthfulQA (0-shot)                   | 67.81 |
| Winogrande (5-shot)                   | 83.74 |
| GSM8k (5-shot)                        | 70.66 |
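
For reference, a minimal inference sketch with 🤗 Transformers. The repo id below is a placeholder (assuming the merged weights are published as <username>/WestOrcaNeural-V2-DARETIES-7B), and the prompt and sampling settings are illustrative only.

```python
# Minimal sketch: load the merged model with transformers and generate text.
# "your-username/WestOrcaNeural-V2-DARETIES-7B" is a placeholder repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/WestOrcaNeural-V2-DARETIES-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the merge dtype above
    device_map="auto",
)

prompt = "Explain what a DARE-TIES model merge does in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```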