
# giraffe176/Open_Maid_Samantha_Hermes_Orca

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the SLERP (spherical linear interpolation) merge method.
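
SLERP interpolates two checkpoints along the arc between their weight vectors rather than along a straight line, which tends to preserve the overall magnitude of the weights better than plain averaging. The snippet below is a simplified illustration of the underlying formula on flattened tensors; it is not mergekit's exact implementation (mergekit additionally applies per-layer interpolation schedules such as the `t` values in the configuration below).

```python
import numpy as np

def slerp(p: np.ndarray, q: np.ndarray, t: float, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors."""
    p_unit = p / (np.linalg.norm(p) + eps)
    q_unit = q / (np.linalg.norm(q) + eps)
    dot = np.clip(np.dot(p_unit, q_unit), -1.0, 1.0)
    theta = np.arccos(dot)          # angle between the two weight vectors
    if theta < eps:                 # nearly parallel: fall back to linear interpolation
        return (1.0 - t) * p + t * q
    return (np.sin((1.0 - t) * theta) * p + np.sin(t * theta) * q) / np.sin(theta)
```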

### Models Merged

The following models were included in the merge:

* cognitivecomputations/samantha-1.1-westlake-7b
* NeverSleep/Noromaid-7B-0.4-DPO
* teknium/OpenHermes-2.5-Mistral-7B
* Open-Orca/Mistral-7B-OpenOrca

### Configuration

The following YAML configuration was used to produce this model. It describes a three-stage SLERP merge: Samantha-Westlake is merged with Noromaid (`workspace1`), OpenHermes is merged with OpenOrca (`workspace2`), and the two intermediate merges are then merged together.

```yaml
models:
  - model: cognitivecomputations/samantha-1.1-westlake-7b
    layer_range: [0, 32]
  - model: NeverSleep/Noromaid-7B-0.4-DPO
    layer_range: [0, 32]
merge_method: slerp
base_model: NeverSleep/Noromaid-7B-0.4-DPO
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
name: workspace1
---
models:
  - model: teknium/OpenHermes-2.5-Mistral-7B
    layer_range: [0, 32]
  - model: Open-Orca/Mistral-7B-OpenOrca
    layer_range: [0, 32]
merge_method: slerp
base_model: teknium/OpenHermes-2.5-Mistral-7B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
name: workspace2
---
models:
  - model: workspace1
    layer_range: [0, 32]
  - model: workspace2
    layer_range: [0, 32]
merge_method: slerp
base_model: workspace1
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
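
The merged model can be loaded like any other Mistral-7B-class checkpoint. The snippet below is a minimal usage sketch with 🤗 Transformers; the model id comes from this repository, while the generation settings and the plain-text prompt (no particular chat template) are illustrative assumptions.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "giraffe176/Open_Maid_Samantha_Hermes_Orca"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the `accelerate` package; drop it to load on CPU.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

prompt = "Explain spherical linear interpolation in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```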

## Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.

| Metric | Value |
|---|---|
| Avg. | 68.81 |
| AI2 Reasoning Challenge (25-Shot) | 66.81 |
| HellaSwag (10-Shot) | 85.83 |
| MMLU (5-Shot) | 64.58 |
| TruthfulQA (0-shot) | 53.91 |
| Winogrande (5-shot) | 80.35 |
| GSM8k (5-shot) | 61.41 |