
# Alsebay/TestSMP-v0.1

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method, with [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2) as the base model.
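For context, DARE operates on parameter deltas (fine-tuned weights minus base weights): each delta is randomly sparsified according to `density` (the fraction of entries retained) and rescaled so the expected update is preserved, after which TIES sign election resolves conflicting update directions and the `weight`-scaled deltas are added back onto the base. The snippet below is an illustrative sketch of the drop-and-rescale step only, not mergekit's implementation; the function name is hypothetical:

```python
import torch

def dare_drop_and_rescale(delta: torch.Tensor, density: float) -> torch.Tensor:
    """Hypothetical sketch of DARE's drop-and-rescale step.

    Randomly keeps a `density` fraction of the delta entries and rescales
    the survivors by 1/density so the expected delta is unchanged.
    """
    mask = torch.bernoulli(torch.full_like(delta, density))
    return delta * mask / density
```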

### Models Merged

The following models were included in the merge:

* [MaziyarPanahi/Llama-3-8B-Instruct-v0.9](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-v0.9)

## Configuration

The following YAML configuration was used to produce this model:


```yaml
slices:
- sources:
  - layer_range: [0, 16]
    model: MaziyarPanahi/Llama-3-8B-Instruct-v0.9
    parameters:
      density: 0.4
      weight: 1.0
  - layer_range: [0, 16]
    model: Sao10K/L3-8B-Stheno-v3.2
    parameters:
      density: 0.6
      weight: 0.9
- sources:
  - layer_range: [16, 32]
    model: MaziyarPanahi/Llama-3-8B-Instruct-v0.9
    parameters:
      density: 0.2
      weight: 0.8
  - layer_range: [16, 32]
    model: Sao10K/L3-8B-Stheno-v3.2
    parameters:
      density: 0.8
      weight: 1.0
merge_method: dare_ties
base_model: Sao10K/L3-8B-Stheno-v3.2
parameters:
  int8_mask: true
dtype: bfloat16
```
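
The merge can be reproduced by saving this configuration to a file and running mergekit's `mergekit-yaml` CLI (for example, `mergekit-yaml config.yaml ./output-model --cuda`). Below is a minimal sketch for loading the merged model with transformers, assuming the `Alsebay/TestSMP-v0.1` repository id from this card and a GPU with enough memory for an 8B model in bfloat16:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository id taken from this model card.
model_id = "Alsebay/TestSMP-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype in the merge config
    device_map="auto",           # requires the `accelerate` package
)

prompt = "Hello, who are you?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```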