
WestMaid_HermesMonarchv0.1


This model benchmarks quite well compared to other 7B models, with exceptional MT-Bench and EQ-Bench v2.1 scores: it ranks higher than ChatGPT-3.5-turbo and Claude-1 on both tests, and higher than Goliath-120b and other 70B models on the latter.

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the DARE TIES merge method with mistralai/Mistral-7B-v0.1 as the base. Density was chosen empirically: after testing many values, I settled on 0.58 for each of the chosen models, as it returned the highest EQ-Bench score. Not much testing was done with the weights, but I thought I'd try gradients. Conceptually, WestLake and a distilled version of OpenHermes are weighted more heavily in the initial layers (guiding understanding and thoughts), before Noromaid and AlphaMonarch come in to guide the model's wants, reasoning, and conversation.
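For intuition, here is a minimal sketch of the two knobs discussed above. The `dare` function is a hedged approximation of the DARE step (keep each delta parameter with probability `density`, then rescale so the expected delta is unchanged), and `interpolate_gradient` is a hypothetical helper showing how a gradient weight list such as [0.50, 0.40, 0.25, 0.05] can be spread across layers. Neither is mergekit's actual code; the 32-layer count (Mistral-7B) and the linear interpolation are assumptions.

```python
import numpy as np
import torch

# Hedged sketch of the DARE step on a single task vector (the delta between
# a fine-tuned model and the base): keep each delta parameter with
# probability `density`, then rescale survivors by 1/density so the
# expected value of the delta is preserved. mergekit's real implementation
# differs in detail.
def dare(delta: torch.Tensor, density: float = 0.58) -> torch.Tensor:
    mask = torch.bernoulli(torch.full_like(delta, density))
    return delta * mask / density

# Hypothetical helper: spread a gradient weight list across the model's
# layers by linear interpolation (assumes 32 layers, as in Mistral-7B).
def interpolate_gradient(gradient, num_layers: int = 32):
    anchors = np.linspace(0, num_layers - 1, num=len(gradient))
    return np.interp(np.arange(num_layers), anchors, gradient)

# Early-heavy gradient (WestLake / distilled OpenHermes) vs. late-heavy
# gradient (Noromaid / AlphaMonarch).
print(interpolate_gradient([0.50, 0.40, 0.25, 0.05])[:4])
print(interpolate_gradient([0.05, 0.05, 0.25, 0.50])[-4:])
```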

Models Merged

The following models were included in the merge:

- senseable/WestLake-7B-v2
- NeverSleep/Noromaid-7B-0.4-DPO
- argilla/distilabeled-OpenHermes-2.5-Mistral-7B
- mlabonne/AlphaMonarch-7B

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: mistralai/Mistral-7B-v0.1
    # No parameters necessary for base model
  - model: senseable/WestLake-7B-v2
    parameters:
      density: 0.58
      weight: [0.50, 0.40, 0.25, 0.05]
  - model: NeverSleep/Noromaid-7B-0.4-DPO
    parameters:
      density: 0.58
      weight: [0.05, 0.05, 0.25, 0.40]
  - model: argilla/distilabeled-OpenHermes-2.5-Mistral-7B
    parameters:
      density: 0.58
      weight: [0.40, 0.50, 0.25, 0.05]
  - model: mlabonne/AlphaMonarch-7B
    parameters:
      density: 0.58
      weight: [0.05, 0.05, 0.25, 0.50]
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  int8_mask: true
dtype: bfloat16
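
Once built, the merged model loads like any other Hugging Face checkpoint. A minimal usage sketch, assuming the transformers library and the published repo id giraffe176/WestMaid_HermesMonarchv0.1; the prompt is illustrative only:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "giraffe176/WestMaid_HermesMonarchv0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype above
    device_map="auto",
)

prompt = "Explain what a model merge is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```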

Benchmark Testing

MT-Bench


EQ-Bench Leaderboard


Table of Benchmarks

Open LLM Leaderboard

| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
|---|---|---|---|---|---|---|---|
| giraffe176/WestMaid_HermesMonarchv0.1 | 72.62 | 70.22 | 87.42 | 64.31 | 61.99 | 82.16 | 69.6 |
| AlphaMonarch-7B | 75.99 | 73.04 | 89.18 | 64.4 | 77.91 | 84.69 | 66.72 |
| senseable/WestLake-7B-v2 | 74.68 | 73.04 | 88.65 | 64.71 | 67.06 | 86.98 | 67.63 |
| teknium/OpenHermes-2.5-Mistral-7B | 61.52 | 64.93 | 84.18 | 63.64 | 52.24 | 78.06 | 26.08 |
| NeverSleep/Noromaid-7B-0.4-DPO | 59.08 | 62.29 | 84.32 | 63.2 | 42.28 | 76.95 | 25.47 |

Yet Another LLM Leaderboard benchmarks

| Model | AGIEval | GPT4All | TruthfulQA | Bigbench | Average |
|---|---|---|---|---|---|
| WestMaid_HermesMonarchv0.1 | 45.34 | 76.33 | 61.99 | 46.02 | 57.42 |

Misc. Benchmarks

| Model | MT-Bench | EQ-Bench v2.1 |
|---|---|---|
| giraffe176/WestMaid_HermesMonarchv0.1 | 8.021875 | 77.19 (3 Shot, ooba) |
| AlphaMonarch-7B | 7.928125 | 76.08 |
| senseable/WestLake-7B-v2 | | 78.7 |
| teknium/OpenHermes-2.5-Mistral-7B | | 66.89 |
| claude-v1 | 7.900000 | 76.83 |
| gpt-3.5-turbo | 7.943750 | 71.74 |
| | (Paper) | (Paper), Leaderboard |