
DesivoMerge0.1

DesivoMerge0.1 is a merge of several models, built with mergekit.

The idea is to continuously merge models into a single main model. The first merge combined open-orca-mistral-7B and open-hermes-7B; I then merged the result with the best-performing 7B model on the Open LLM Leaderboard at the time (TurdusBeagle-7B).

I will keep adding models to the merge until a new merge's average score drops below that of the previous merge, in which case I will backtrack and look for a different model to merge instead.

I will try to avoid benchmark-contaminated models by vetting each candidate before merging it.
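The strategy above is essentially a greedy loop with backtracking: keep a merge only if it raises the average score, otherwise discard it and move on to the next candidate. A minimal sketch, where `merge_fn`, `eval_fn`, and the candidate list are all hypothetical placeholders (the real process involves running mergekit and benchmark evaluations):

```python
def greedy_merge(base_score, candidates, merge_fn, eval_fn):
    """Greedily fold candidates into a main model, keeping a merge
    only if it improves the evaluation score; otherwise backtrack
    (i.e. discard the merge) and try the next candidate."""
    best = None          # current main model (None = no merge yet)
    score = base_score   # score of the current main model
    for cand in candidates:
        merged = merge_fn(best, cand)
        new_score = eval_fn(merged)
        if new_score >= score:
            # The merge helped: it becomes the new main model.
            best, score = merged, new_score
        # else: keep the previous main model and move on.
    return best, score
```

This is the decision rule only; in practice each `eval_fn` call would be a full leaderboard-style evaluation, which is what makes each iteration expensive.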

🧩 Configuration

slices:
  - sources:
      - model: ./merge
        layer_range: [0, 32]
      - model: Azazelle/Argetsu
        layer_range: [0, 32]
merge_method: slerp
base_model: ./merge
tokenizer_source: base
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
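The `slerp` method interpolates each pair of weight tensors along the arc between them rather than along a straight line, and the `t` lists above are anchor points that get expanded into one interpolation factor per layer. The following is a minimal NumPy sketch of both ideas; mergekit's actual implementation differs in details, and the assumption that anchors are linearly interpolated across the 32 layers is mine:

```python
import numpy as np

def slerp(t, a, b, eps=1e-8):
    """Spherical linear interpolation between flat weight vectors a and b.
    t=0 returns a, t=1 returns b."""
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    dot = np.clip(np.dot(a_n, b_n), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        return (1 - t) * a + t * b
    s = np.sin(theta)
    return (np.sin((1 - t) * theta) / s) * a + (np.sin(t * theta) / s) * b

# Expand the anchor list from the config into a per-layer t value
# (assumed: linear interpolation of anchors over layer depth).
anchors = [0, 0.5, 0.3, 0.7, 1]          # self_attn filter from the config
num_layers = 32
xs = np.linspace(0.0, 1.0, len(anchors))
t_per_layer = np.interp(np.linspace(0.0, 1.0, num_layers), xs, anchors)
```

So with this config, self-attention weights lean toward the base model (`./merge`) in early layers and toward Azazelle/Argetsu in late layers, while the `mlp` filter runs the opposite gradient and all remaining tensors use a flat `t` of 0.5.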