
This merge is sad; it feels like a downgrade.
Mixed bag: sometimes great, other times meh. Not bad, but cursed with 'barely above a whisper' (might be my card).
Follows instructions pretty well.

Llama-3-8B-Irene-v0.3

Same idea as the previous merge, but using saishf's SOVL merge line.


This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the SLERP merge method.
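
SLERP interpolates along the arc between two weight vectors instead of along the straight line between them, which keeps the norm of the blended weights close to the originals. A minimal NumPy sketch of the idea (illustrative only, not mergekit's actual implementation, which also handles per-layer t schedules):

import numpy as np

def slerp(t: float, w0: np.ndarray, w1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors (sketch)."""
    v0 = w0.ravel().astype(np.float64)
    v1 = w1.ravel().astype(np.float64)
    # Angle between the two flattened weight vectors
    cos_theta = np.dot(v0, v1) / (np.linalg.norm(v0) * np.linalg.norm(v1) + eps)
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    if theta < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation
        out = (1.0 - t) * v0 + t * v1
    else:
        sin_theta = np.sin(theta)
        out = (np.sin((1.0 - t) * theta) / sin_theta) * v0 + (np.sin(t * theta) / sin_theta) * v1
    return out.reshape(w0.shape).astype(w0.dtype)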

Models Merged

The following models were included in the merge:

Virt-io/Llama-3-8B-Irene-v0.2
Mergekit/Neko-Maid-SLERP
saishf/Kitty-Cat-SOVL-8B-L3-V1
saishf/SOVLish-Maid-L3-8B

Configuration

The following YAML configurations were used to produce this model:

# Final merge: Irene-v0.2 slerped with the Neko-Maid-SLERP model
slices:
  - sources:
      - model: Virt-io/Llama-3-8B-Irene-v0.2
        layer_range: [0, 32]
      - model: Mergekit/Neko-Maid-SLERP
        layer_range: [0, 32]
merge_method: slerp
base_model: Virt-io/Llama-3-8B-Irene-v0.2
parameters:
  t:
    # interpolation weights, spread across the layer stack
    - value: [0.55, 0.15, 0.55, 0.15, 0.55, 0.15, 0.55, 0.15, 0.35, 0.15, 0.55, 0.15, 0.55, 0.15, 0.55, 0.15, 0.55]
dtype: bfloat16

# SOVL-line merge (presumably the Neko-Maid-SLERP intermediate referenced above)
slices:
  - sources:
      - model: saishf/Kitty-Cat-SOVL-8B-L3-V1
        layer_range: [0, 32]
      - model: saishf/SOVLish-Maid-L3-8B
        layer_range: [0, 32]
merge_method: slerp
base_model: saishf/Kitty-Cat-SOVL-8B-L3-V1
parameters:
  t:
    # same interpolation schedule as above
    - value: [0.55, 0.15, 0.55, 0.15, 0.55, 0.15, 0.55, 0.15, 0.35, 0.15, 0.55, 0.15, 0.55, 0.15, 0.55, 0.15, 0.55]
dtype: bfloat16
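
A minimal usage sketch with transformers. The repo id Virt-io/Llama-3-8B-Irene-v0.3 is an assumption here; substitute the actual repository or a local path to the merged weights:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Virt-io/Llama-3-8B-Irene-v0.3"  # assumed repo id; adjust as needed
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype
    device_map="auto",
)

prompt = "Write a short greeting."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))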