

An experimental merge that attempts to gain the roleplaying capabilities of Undi95/Toppy-M-7B and SanjiWatsuki/Loyal-Macaroni-Maid-7B while maintaining the context length and general capabilities of the original mistralai/Mistral-7B-Instruct-v0.2.

The idea was that combining two models with one self-merge would make each layer more unique, and therefore make the model “smarter” than a regular self-merge.

EXL2 quantization, 6.0 bpw
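
The quant can be loaded with the exllamav2 library. A minimal sketch; the local model path and sampling settings are assumptions, not part of this repo:

```python
# Sketch: loading the 6.0 bpw EXL2 quant with exllamav2.
# The model_dir path is an assumption -- point it at wherever
# you downloaded the quantized weights.
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "./10.7B-Loyal-Mistral-Maid-32k-v0.2-A-exl2"  # assumed local path
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split across available GPUs as it loads

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8  # illustrative sampling values
settings.top_p = 0.9

# Mistral-Instruct prompt format, inherited from the base model
print(generator.generate_simple("[INST] Hello! [/INST]", settings, 200))
```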

10.7B Loyal Mistral Maid v0.2

```yaml
slices:
  - sources:
      - model: Mistral_Instruct_SelfMerge
        layer_range: [0, 48]
      - model: Loyal_Toppy_Maid
        layer_range: [0, 48]
merge_method: slerp
base_model: Mistral_Instruct_SelfMerge
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```
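
The `t` values above are anchor points that mergekit interpolates across layer depth, so (roughly) self-attention tensors start at the base self-merge (t = 0) near the bottom of the stack and lean toward Loyal_Toppy_Maid (t = 1) at the top, while MLP tensors run the opposite gradient. For intuition, here is a minimal NumPy sketch of the SLERP operation itself; it is illustrative, not mergekit's exact implementation:

```python
import numpy as np

def slerp(a: np.ndarray, b: np.ndarray, t: float, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors."""
    a_flat, b_flat = a.ravel(), b.ravel()
    # Angle between the two weight directions, from normalized copies
    a_norm = a_flat / (np.linalg.norm(a_flat) + eps)
    b_norm = b_flat / (np.linalg.norm(b_flat) + eps)
    dot = np.clip(np.dot(a_norm, b_norm), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < eps:  # nearly colinear: fall back to plain lerp
        return (1 - t) * a + t * b
    s = np.sin(theta)
    out = (np.sin((1 - t) * theta) / s) * a_flat + (np.sin(t * theta) / s) * b_flat
    return out.reshape(a.shape)

# t = 0 keeps the first tensor, t = 1 takes the second
print(slerp(np.ones((2, 2)), np.eye(2), 0.5))
```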

Loyal Toppy Maid

```yaml
slices:
  - sources:
    - model: Undi95/Toppy-M-7B
      layer_range: [0, 24]
  - sources:
    - model: SanjiWatsuki/Loyal-Macaroni-Maid-7B
      layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
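
This passthrough merge stacks layers 0–23 of Undi95/Toppy-M-7B on top of layers 8–31 of SanjiWatsuki/Loyal-Macaroni-Maid-7B, yielding 48 layers versus the 32 of a stock Mistral-7B; the same trick below produces the 48-layer self-merge, and that extra depth is where the 10.7B parameter count comes from.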

Mistral_Instruct_SelfMerge

```yaml
slices:
  - sources:
    - model: mistralai/Mistral-7B-Instruct-v0.2
      layer_range: [0, 24]
  - sources:
    - model: mistralai/Mistral-7B-Instruct-v0.2
      layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
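
To reproduce the merge, each config can be run with mergekit. Below is a sketch using mergekit's Python API; the file and output paths are assumptions, and the two passthrough merges must be built first so that Mistral_Instruct_SelfMerge and Loyal_Toppy_Maid exist locally when the final SLERP config references them:

```python
# Sketch: running one of the mergekit configs above programmatically.
# File and output paths are assumptions; build the two passthrough
# merges before the final SLERP config that references them.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("loyal_mistral_maid_slerp.yml", encoding="utf-8") as fp:  # assumed file name
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./10.7B-Loyal-Mistral-Maid-32k-v0.2-A",
    options=MergeOptions(cuda=False, copy_tokenizer=True),
)
```

The `mergekit-yaml` CLI (`mergekit-yaml config.yml ./output-dir`) is an equivalent one-step alternative for each config.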