
# llama3

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

## Merge Details

### Merge Method

This model was merged using the passthrough merge method, which copies the specified layer ranges from the source model and stacks them back to back without averaging any weights.

### Models Merged

The following models were included in the merge:

* D:/text-generation-webui/models/meta-llama_Meta-Llama-3-8B-Instruct

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: D:/text-generation-webui/models/meta-llama_Meta-Llama-3-8B-Instruct # embed_tokens comes along for the ride with whatever is the first layer
        layer_range: [0, 1]
      - model: D:/text-generation-webui/models/meta-llama_Meta-Llama-3-8B-Instruct # add a dummy second model with 0 weight so the tokenizer-based merge routine is invoked for embed_tokens
        layer_range: [0, 1]
  - sources:
      - model: D:/text-generation-webui/models/meta-llama_Meta-Llama-3-8B-Instruct
        layer_range: [1, 24]
  - sources:
      - model: D:/text-generation-webui/models/meta-llama_Meta-Llama-3-8B-Instruct
        layer_range: [8, 20]
  - sources:
      - model: D:/text-generation-webui/models/meta-llama_Meta-Llama-3-8B-Instruct
        layer_range: [18, 32]
      - model: D:/text-generation-webui/models/meta-llama_Meta-Llama-3-8B-Instruct
        layer_range: [18, 32]
merge_method: passthrough
dtype: bfloat16
```
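
Passthrough slicing makes the resulting depth easy to audit. The sketch below tallies the decoder layers in the output from the `layer_range` boundaries above; it assumes, as the config comments indicate, that a slice listing the same model twice still contributes only one copy of its layer range (the duplicate exists to trigger the embedding-merge routine).

```python
# Tally the decoder layers produced by the passthrough stack above.
# Assumption (per the config comments): a slice that lists the same model
# twice contributes a single copy of its layer range, not two.
slice_ranges = [(0, 1), (1, 24), (8, 20), (18, 32)]
total_layers = sum(end - start for start, end in slice_ranges)
print(total_layers)  # 50 layers, versus 32 in the original Llama-3-8B-Instruct
```

Fifty layers at Llama-3-8B's width works out to roughly 12B parameters, since layers 8–31 of the base model appear twice in the stack.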

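A minimal loading sketch with Hugging Face Transformers follows; the repo id `your-namespace/llama3-12b-passthrough` is a hypothetical placeholder for wherever the merged weights are saved or uploaded.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id; substitute the actual location of the merged weights.
model_id = "your-namespace/llama3-12b-passthrough"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Load in bfloat16 to match the merge's `dtype: bfloat16`.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

inputs = tokenizer("Hello", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```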