# BigWeave v27 95b

The BigWeave models aim to experimentally identify merge settings for increasing model performance. The version number merely tracks various attempts and is not a quality indicator. Only results demonstrating good performance are retained and shared.

## Prompting Format

ChatML, Mistral, and Vicuna.
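As a quick illustration of the first of these formats, ChatML wraps each turn in `<|im_start|>`/`<|im_end|>` delimiters. The helper below is a hypothetical sketch (not part of this card) showing how such a prompt is assembled:

```python
# Hypothetical helper: builds a single-turn ChatML prompt using the
# standard <|im_start|>/<|im_end|> delimiters, ending with an open
# assistant turn for the model to complete.
def chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(chatml_prompt("You are a helpful assistant.", "Hello!"))
```

The Mistral and Vicuna formats use their own delimiters and would need analogous templates.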

## Merge process

This is a self-merge of 152334H/miqu-1-70b-sf. The 30 most important layers (according to exl2 measurements) are duplicated with 50% overlap.

Merge configuration:

```yaml
slices:
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [0,40]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [34,45] # dup 34-44
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [40,52]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [51,53] # dup 51-52
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [52,55]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [54,56] # dup 54-55
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [55,59]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [58,60] # dup 58-59
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [59,72]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [64,79] # dup 64-78
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [72,80]
merge_method: passthrough
dtype: float16
```
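Since the passthrough method simply stacks the listed layer ranges in order, the depth of the merged model can be read off the config. As a sanity check (not part of the original card), the snippet below sums the range widths from the configuration above:

```python
# Layer ranges copied from the merge configuration above.
# Passthrough merging concatenates these slices, so the merged depth
# is the sum of the range widths (end - start, half-open intervals).
ranges = [
    (0, 40), (34, 45), (40, 52), (51, 53), (52, 55), (54, 56),
    (55, 59), (58, 60), (59, 72), (64, 79), (72, 80),
]
total_layers = sum(end - start for start, end in ranges)
extra_layers = total_layers - 80  # the base miqu-1-70b has 80 layers
print(total_layers, extra_layers)  # 112 32
```

The 112-layer stack (32 layers more than the 80-layer base) accounts for the jump from 70B to roughly 96B parameters.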