# merged_model

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

## Merge Details

### Merge Method

This model was merged using the SLERP merge method, with meta-llama/Llama-3.1-8B-Instruct as the base model.
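
SLERP (spherical linear interpolation) blends two checkpoints along the great-circle arc between their weight vectors rather than along a straight line, which keeps the interpolated weights at a comparable norm. Below is a minimal, illustrative sketch of the operation on a single pair of tensors; it is not mergekit's exact implementation, and the function name and colinearity threshold are assumptions. With the `t: 0.5` set in the configuration below, the result sits at the midpoint of the arc.

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors (sketch)."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    # Angle between the two weight vectors, measured on normalized copies.
    dot = torch.clamp(
        (a_flat / (a_flat.norm() + eps)) @ (b_flat / (b_flat.norm() + eps)),
        -1.0, 1.0,
    )
    omega = torch.arccos(dot)
    if omega.abs() < 1e-4:
        # Nearly colinear vectors: plain linear interpolation is numerically safer.
        out = (1.0 - t) * a_flat + t * b_flat
    else:
        sin_omega = torch.sin(omega)
        out = (torch.sin((1.0 - t) * omega) / sin_omega) * a_flat + (
            torch.sin(t * omega) / sin_omega
        ) * b_flat
    return out.reshape(a.shape).to(a.dtype)
```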

### Models Merged

The following models were included in the merge:

* meta-llama/Llama-3.1-8B-Instruct (used as both sources in every slice, with different per-source weights)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: meta-llama/Llama-3.1-8B-Instruct
merge_method: slerp
parameters:
  t: 0.5
slices:
- sources:
  - model: meta-llama/Llama-3.1-8B-Instruct
    layer_range: [0, 8]
    weight: 0.4
  - model: meta-llama/Llama-3.1-8B-Instruct
    layer_range: [0, 8]
    weight: 0.6
  merge_method: task_arithmetic
- sources:
  - model: meta-llama/Llama-3.1-8B-Instruct
    layer_range: [9, 20]
    weight: 0.7
  - model: meta-llama/Llama-3.1-8B-Instruct
    layer_range: [9, 20]
    weight: 0.3
  merge_method: task_arithmetic
- sources:
  - model: meta-llama/Llama-3.1-8B-Instruct
    layer_range: [21, 31]
    weight: 0.8
  - model: meta-llama/Llama-3.1-8B-Instruct
    layer_range: [21, 31]
    weight: 0.2
  merge_method: task_arithmetic
dtype: bfloat16
```
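
To reproduce the merge, the configuration can be saved to a file and fed to mergekit, either via the `mergekit-yaml` CLI (`mergekit-yaml config.yaml ./merged_model`) or through the Python API. The sketch below follows mergekit's documented Python usage; the file name `config.yaml` is an assumption, and API details may vary between mergekit versions.

```python
# Minimal sketch, assuming mergekit is installed (pip install mergekit)
# and the YAML above is saved as config.yaml (hypothetical file name).
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./merged_model",
    options=MergeOptions(cuda=False, copy_tokenizer=True),
)
```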
The merged model has 7.38B parameters, stored as bfloat16 safetensors. The count is lower than the ~8B of the base model because the layer ranges above cover only 29 of its 32 layers (mergekit layer ranges are end-exclusive, so layers 8, 20, and 31 are dropped).
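
The merged checkpoint loads like any other Llama-style causal LM. A minimal sketch, assuming the model is published under the shivash/moetest1 repository and that transformers and torch are installed:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("shivash/moetest1")
model = AutoModelForCausalLM.from_pretrained(
    "shivash/moetest1",
    torch_dtype=torch.bfloat16,  # matches the dtype declared in the merge config
    device_map="auto",
)

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```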
