
# Llama-3-experimental-merge-trial1-8B

Built with Meta Llama 3.

This is an experimental merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the SLERP merge method, with Nitral-AI/Poppy_Porpoise-v0.2-L3-8B as the base model.
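
SLERP (spherical linear interpolation) blends each pair of weight tensors along the arc between them rather than the straight line, which avoids the norm shrinkage that plain linear averaging can cause. Below is a minimal per-tensor sketch in PyTorch of the underlying formula, assuming a single interpolation factor `t` (0.5 in the configuration below) and a linear-interpolation fallback for near-parallel tensors; it is illustrative only, not mergekit's actual implementation:

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Sketch of spherical linear interpolation between two weight tensors.
    # Not mergekit's exact code; mergekit also handles per-layer t values
    # and other edge cases.
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    # Normalize copies to measure the angle between the two weight vectors.
    a_norm = a_flat / (a_flat.norm() + eps)
    b_norm = b_flat / (b_flat.norm() + eps)
    dot = torch.clamp(torch.dot(a_norm, b_norm), -1.0, 1.0)
    omega = torch.acos(dot)  # angle between the tensors
    # Fall back to plain linear interpolation when the tensors are nearly
    # parallel and sin(omega) would be numerically unstable.
    if omega.abs() < 1e-4:
        return (1.0 - t) * a + t * b
    sin_omega = torch.sin(omega)
    coef_a = torch.sin((1.0 - t) * omega) / sin_omega
    coef_b = torch.sin(t * omega) / sin_omega
    return (coef_a * a_flat + coef_b * b_flat).reshape(a.shape).to(a.dtype)
```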

### Models Merged

The following models were included in the merge:

* Nitral-AI/Poppy_Porpoise-v0.2-L3-8B
* Sao10K/L3-Solana-8B-v1

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
    - model: Nitral-AI/Poppy_Porpoise-v0.2-L3-8B
      layer_range: [0, 32]
    - model: Sao10K/L3-Solana-8B-v1
      layer_range: [0, 32]
merge_method: slerp
base_model: Nitral-AI/Poppy_Porpoise-v0.2-L3-8B
parameters:
  t:
    - value: 0.5
dtype: bfloat16
```
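
To reproduce the merge, save the configuration above to a file and run it with mergekit's command-line tool, e.g. `mergekit-yaml config.yaml ./merged-model` (the file and output paths here are placeholders; see the mergekit repository for current options).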