
Llama-3-Instruct-demi-merge-8B

This is a merge of pre-trained language models created using mergekit.

This merge is intended as a compromise between the base and instruct models: it "thaws out" the instruct model to make further merging and/or fine-tuning easier, while keeping some of its instruction-following strengths.

Built with Meta Llama 3.

Merge Details

Merge Method

This model was merged using the SLERP merge method.
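
SLERP interpolates each pair of corresponding weight tensors along the arc between them rather than along a straight line, which tends to preserve the overall scale of the weights better than plain averaging. The following is a minimal sketch of the idea, not mergekit's actual implementation; the helper name and the fallback threshold are illustrative assumptions.

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors.

    t=0 returns `a`, t=1 returns `b`; t=0.5 (as in the config below)
    lands halfway along the arc between them.
    """
    a_dir = a.flatten().float()
    b_dir = b.flatten().float()
    a_dir = a_dir / (a_dir.norm() + eps)
    b_dir = b_dir / (b_dir.norm() + eps)
    dot = torch.clamp(a_dir @ b_dir, -1.0, 1.0)
    omega = torch.arccos(dot)             # angle between the two tensors
    if omega.abs().item() < eps:          # nearly parallel: fall back to linear interpolation
        return (1 - t) * a + t * b
    sin_omega = torch.sin(omega)
    return (torch.sin((1 - t) * omega) / sin_omega) * a + (torch.sin(t * omega) / sin_omega) * b
```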

Models Merged

The following models were included in the merge:

- meta-llama/Meta-Llama-3-8B
- meta-llama/Meta-Llama-3-8B-Instruct

Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
    - model: meta-llama/Meta-Llama-3-8B
      layer_range: [0, 32]
    - model: meta-llama/Meta-Llama-3-8B-Instruct
      layer_range: [0, 32]
merge_method: slerp
base_model: meta-llama/Meta-Llama-3-8B
parameters:
  t:
    - value: 0.5
dtype: bfloat16
```
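
With mergekit installed, a configuration like this is typically run from the command line (for example with its `mergekit-yaml` entry point) and writes a Hugging Face-compatible checkpoint directory. Below is a minimal, hypothetical sketch of loading that output for further fine-tuning; the local path is an assumption, not a published artifact.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical path to the directory mergekit wrote for this config.
merged_path = "./Llama-3-Instruct-demi-merge-8B"

tokenizer = AutoTokenizer.from_pretrained(merged_path)
model = AutoModelForCausalLM.from_pretrained(merged_path, torch_dtype=torch.bfloat16)

# The t=0.5 merge keeps some instruction-following behaviour while leaving the
# weights closer to the base model, so it can serve as a starting point for
# further merging or fine-tuning.
```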