
Mistral_2X7b

Mistral_2X7b is a merge of the following models using mergekit:

- mistralai/Mistral-7B-Instruct-v0.2
- mistralai/Mistral-7B-v0.1

🧩 Configuration

slices:
  - sources:
      - model: mistralai/Mistral-7B-Instruct-v0.2
        layer_range: [0, 32]
      - model: mistralai/Mistral-7B-v0.1
        layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
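
The configuration above can be run with mergekit's command-line tool, for example: mergekit-yaml config.yaml ./mistral_2x7b --cuda (the config file name and output path here are placeholders).

💻 Usage

A minimal inference sketch with 🤗 Transformers. The repo id tourist800/mistral_2X7b is taken from the hosted model page, and the chat template is assumed to be inherited from the Instruct base model; this is an illustration, not an official usage snippet.

# Inference sketch for the merged model (repo id assumed from the model page).
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "tourist800/mistral_2X7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype above
    device_map="auto",
)

messages = [{"role": "user", "content": "What is a SLERP model merge?"}]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))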
Model size: 7.24B parameters · Tensor type: BF16 (Safetensors)