
# BigWeave v31 96b

The BigWeave models are experiments aimed at identifying merge settings that increase model performance. The version number only tracks the individual attempts and is not a quality indicator. Only attempts that demonstrate good performance are kept and published.

## Prompting Format

Llama 3 Instruct
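
This is the standard chat template inherited from the base model; a minimal prompt looks like:

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{user message}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

```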

## Merge process

This is a self-merge of meta-llama/Meta-Llama-3-70B-Instruct. Middle layers are duplicated and various matrices are scaled according to the template by jukofyork as shown here: https://github.com/arcee-ai/mergekit/issues/198#issuecomment-2079950009
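
The intuition behind the factors (my reading of the linked template, not stated on the original card): each duplicated layer's write into the residual stream is halved via o_proj/down_proj, and the attention logits, which are bilinear in q and k, are halved by scaling q_proj and k_proj each by 1/sqrt(2). A minimal sketch of the arithmetic:

```python
import math

# Duplicated layers would otherwise contribute twice, so the matrices that
# write into the residual stream (o_proj, down_proj) are halved.
RESIDUAL_SCALE_FACTOR = 1 / 2

# Attention logits are proportional to q . k, so scaling q_proj and k_proj
# each by 1/sqrt(2) halves the logits overall.
QK_ATTENUATION_FACTOR = 1 / math.sqrt(2)

# Milder attenuation used for v_proj and up_proj in the template.
OUT_FACTOR = 0.9

print(round(QK_ATTENUATION_FACTOR, 10))      # 0.7071067812 (matches the config)
print(round(QK_ATTENUATION_FACTOR ** 2, 3))  # 0.5 -> effective scale on the logits
```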

Merge configuration:

```yaml
const_tag: &MODEL meta-llama/Meta-Llama-3-70B-Instruct

# Scale factors from jukofyork's template (see link above).
const_tag: &RESIDUAL_SCALE_FACTOR 0.5           # applied to o_proj/down_proj of duplicated layers
const_tag: &QK_ATTENUATION_FACTOR 0.7071067812  # 1/sqrt(2), applied to q_proj/k_proj
const_tag: &OUT_FACTOR 0.9                      # applied to v_proj/up_proj

# Scaling overrides applied to every duplicated slice below.
scale-filter-env: &scale_filter_env
  parameters:
    scale:
      - filter: o_proj       # attention output projection -> residual stream
        value: *RESIDUAL_SCALE_FACTOR
      - filter: down_proj    # MLP down projection -> residual stream
        value: *RESIDUAL_SCALE_FACTOR
      - filter: q_proj
        value: *QK_ATTENUATION_FACTOR
      - filter: k_proj
        value: *QK_ATTENUATION_FACTOR
      - filter: v_proj
        value: *OUT_FACTOR
      - filter: up_proj
        value: *OUT_FACTOR
      - value: 1.0           # all other tensors are left unscaled

slices:
  # Layers 0-24 and 55-79 appear once; layers 25-54 each appear in two of the
  # overlapping slices below, for 110 layers in total.
  - sources:
    - model: *MODEL
      layer_range: [0, 25]

  - sources:
    - model: *MODEL
      layer_range: [25, 26]
      <<: *scale_filter_env

  - sources:
    - model: *MODEL
      layer_range: [25, 27]
      <<: *scale_filter_env
  - sources:
    - model: *MODEL
      layer_range: [26, 28]
      <<: *scale_filter_env
  - sources:
    - model: *MODEL
      layer_range: [27, 29]
      <<: *scale_filter_env
  - sources:
    - model: *MODEL
      layer_range: [28, 30]
      <<: *scale_filter_env
  - sources:
    - model: *MODEL
      layer_range: [29, 31]
      <<: *scale_filter_env
  - sources:
    - model: *MODEL
      layer_range: [30, 32]
      <<: *scale_filter_env
  - sources:
    - model: *MODEL
      layer_range: [31, 33]
      <<: *scale_filter_env
  - sources:
    - model: *MODEL
      layer_range: [32, 34]
      <<: *scale_filter_env
  - sources:
    - model: *MODEL
      layer_range: [33, 35]
      <<: *scale_filter_env
  - sources:
    - model: *MODEL
      layer_range: [34, 36]
      <<: *scale_filter_env
  - sources:
    - model: *MODEL
      layer_range: [35, 37]
      <<: *scale_filter_env
  - sources:
    - model: *MODEL
      layer_range: [36, 38]
      <<: *scale_filter_env
  - sources:
    - model: *MODEL
      layer_range: [37, 39]
      <<: *scale_filter_env
  - sources:
    - model: *MODEL
      layer_range: [38, 40]
      <<: *scale_filter_env
  - sources:
    - model: *MODEL
      layer_range: [39, 41]
      <<: *scale_filter_env
  - sources:
    - model: *MODEL
      layer_range: [40, 42]
      <<: *scale_filter_env
  - sources:
    - model: *MODEL
      layer_range: [41, 43]
      <<: *scale_filter_env
  - sources:
    - model: *MODEL
      layer_range: [42, 44]
      <<: *scale_filter_env
  - sources:
    - model: *MODEL
      layer_range: [43, 45]
      <<: *scale_filter_env
  - sources:
    - model: *MODEL
      layer_range: [44, 46]
      <<: *scale_filter_env
  - sources:
    - model: *MODEL
      layer_range: [45, 47]
      <<: *scale_filter_env
  - sources:
    - model: *MODEL
      layer_range: [46, 48]
      <<: *scale_filter_env
  - sources:
    - model: *MODEL
      layer_range: [47, 49]
      <<: *scale_filter_env
  - sources:
    - model: *MODEL
      layer_range: [48, 50]
      <<: *scale_filter_env
  - sources:
    - model: *MODEL
      layer_range: [49, 51]
      <<: *scale_filter_env
  - sources:
    - model: *MODEL
      layer_range: [50, 52]
      <<: *scale_filter_env
  - sources:
    - model: *MODEL
      layer_range: [51, 53]
      <<: *scale_filter_env
  - sources:
    - model: *MODEL
      layer_range: [52, 54]
      <<: *scale_filter_env
  - sources:
    - model: *MODEL
      layer_range: [53, 55]
      <<: *scale_filter_env

  - sources:
    - model: *MODEL
      layer_range: [54, 55]
      <<: *scale_filter_env
  - sources:
    - model: *MODEL
      layer_range: [55, 80]

merge_method: passthrough  # slices are concatenated as-is; the per-slice scale overrides above are the only weight modification
dtype: float16
```
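
As a quick sanity check (not part of the original card), the slice ranges above can be tallied to get the depth of the merged stack, consistent with the "96b" in the name:

```python
# Slice ranges copied from the config above: (start, end), end exclusive.
ranges = [(0, 25), (25, 26)] + [(s, s + 2) for s in range(25, 54)] + [(54, 55), (55, 80)]

total = sum(end - start for start, end in ranges)
print(total)        # 110 layers in the merged model
print(total - 80)   # 30 layers duplicated relative to the 80-layer base model
```

The config should be runnable with mergekit's `mergekit-yaml` entry point (e.g. `mergekit-yaml config.yaml ./output-dir`); the exact invocation is an assumption, not something stated on the card.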