
This model is intended for further fine-tuning; it is iffy as-is. It was made using a new structure I call "ripple merge", which works backwards and forwards through the model's layers (see the sketch below).

Other frankenmerge methods I tried were failing at sizes over 11B.
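
As a rough illustration, here is a small Python sketch (not part of mergekit; the helper names are mine) that generates the slice list used in the configuration below:

```python
# Sketch of the "ripple" schedule: a forward block of layers, then
# single-layer steps backwards, then another forward block, and so on.
# The model ID is the real merge source; everything else is illustrative.
MODEL = "NousResearch/Meta-Llama-3-8B-Instruct"

slices = [(0, 15)]                                  # forward: layers 0-14
slices += [(i, i + 1) for i in range(14, 8, -1)]    # ripple back: 14 down to 9
slices.append((8, 23))                              # forward again: layers 8-22
slices += [(i, i + 1) for i in range(21, 11, -1)]   # ripple back: 21 down to 12
slices.append((12, 32))                             # forward to the end: 12-31

print("slices:")
for lo, hi in slices:
    print("  - sources:")
    print(f"      - model: {MODEL}")
    print(f"        layer_range: [{lo}, {hi}]")

# 15 + 6 + 15 + 10 + 20 = 66 decoder layers in the merged model
print("# total layers:", sum(hi - lo for lo, hi in slices))
```

With 66 decoder layers against the base model's 32, the merge lands at roughly 15B parameters, matching the model name.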


# Llama-3-15b-Instruct

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the passthrough merge method, which stacks the specified layer slices in order without averaging or interpolating any weights.

### Models Merged

The following model was included in the merge:

* NousResearch/Meta-Llama-3-8B-Instruct

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: NousResearch/Meta-Llama-3-8B-Instruct
        layer_range: [0, 15]
  - sources:
      - model: NousResearch/Meta-Llama-3-8B-Instruct
        layer_range: [14, 15]
  - sources:
      - model: NousResearch/Meta-Llama-3-8B-Instruct
        layer_range: [13, 14]
  - sources:
      - model: NousResearch/Meta-Llama-3-8B-Instruct
        layer_range: [12, 13]
  - sources:
      - model: NousResearch/Meta-Llama-3-8B-Instruct
        layer_range: [11, 12]
  - sources:
      - model: NousResearch/Meta-Llama-3-8B-Instruct
        layer_range: [10, 11]
  - sources:
      - model: NousResearch/Meta-Llama-3-8B-Instruct
        layer_range: [9, 10]
  - sources:
      - model: NousResearch/Meta-Llama-3-8B-Instruct
        layer_range: [8, 23]
  - sources:
      - model: NousResearch/Meta-Llama-3-8B-Instruct
        layer_range: [21, 22]
  - sources:
      - model: NousResearch/Meta-Llama-3-8B-Instruct
        layer_range: [20, 21]
  - sources:
      - model: NousResearch/Meta-Llama-3-8B-Instruct
        layer_range: [19, 20]
  - sources:
      - model: NousResearch/Meta-Llama-3-8B-Instruct
        layer_range: [18, 19]
  - sources:
      - model: NousResearch/Meta-Llama-3-8B-Instruct
        layer_range: [17, 18]
  - sources:
      - model: NousResearch/Meta-Llama-3-8B-Instruct
        layer_range: [16, 17]
  - sources:
      - model: NousResearch/Meta-Llama-3-8B-Instruct
        layer_range: [15, 16]
  - sources:
      - model: NousResearch/Meta-Llama-3-8B-Instruct
        layer_range: [14, 15]
  - sources:
      - model: NousResearch/Meta-Llama-3-8B-Instruct
        layer_range: [13, 14]
  - sources:
      - model: NousResearch/Meta-Llama-3-8B-Instruct
        layer_range: [12, 13]
  - sources:
      - model: NousResearch/Meta-Llama-3-8B-Instruct
        layer_range: [12, 32]

merge_method: passthrough
dtype: float16
```
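
To reproduce the merge, a config like the one above can be fed to mergekit's `mergekit-yaml` entry point. A minimal sketch, assuming mergekit is installed and using illustrative paths:

```python
# Minimal sketch: run mergekit's CLI on the config above.
# "mergekit-yaml" is mergekit's standard entry point; config.yaml and the
# output directory are illustrative paths, not taken from this repository.
import subprocess

subprocess.run(
    ["mergekit-yaml", "config.yaml", "./Llama-3-15b-Instruct"],
    check=True,
)
```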


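Since this card positions the model as a base for further fine-tuning, here is a minimal loading sketch using the standard transformers API; the local path is illustrative, so substitute the repo ID on the Hub or your own merge output:

```python
# Minimal sketch: load the merged model in float16 for further fine-tuning.
# The path is illustrative; point it at the merge output or the Hub repo.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

path = "./Llama-3-15b-Instruct"
tokenizer = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.float16)

# Sanity check before training: the merged stack should still produce text.
inputs = tokenizer("The capital of France is", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=8)[0]))
```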
