---
library_name: transformers
license: apache-2.0
base_model:
  - google/gemma-7b
  - google/gemma-7b-it
tags:
  - merge
  - mergekit
  - google/gemma-7b-it
  - google/gemma-7b
---


# Gemma-7B-slerp

This model is a merge of the Gemma-7B base and instruct models, created with the SLERP (spherical linear interpolation) merge method.

Gemma-7B-slerp is a merge of the following models using [mergekit](https://github.com/arcee-ai/mergekit):

* [google/gemma-7b](https://huggingface.co/google/gemma-7b)
* [google/gemma-7b-it](https://huggingface.co/google/gemma-7b-it)
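For intuition, SLERP interpolates along the great-circle arc between two flattened weight tensors rather than along a straight line, which preserves the geometry of the parameter vectors better than plain averaging. The snippet below is a minimal illustrative sketch of the operation, not mergekit's actual implementation; the `slerp` helper is hypothetical.

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors.

    t=0 returns `a`, t=1 returns `b`; intermediate values follow the
    great-circle arc between the two (flattened, normalized) vectors.
    """
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    # Angle between the two parameter vectors.
    omega = torch.arccos(torch.clamp(a_unit @ b_unit, -1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:
        # Nearly colinear vectors: fall back to ordinary linear interpolation.
        mixed = (1.0 - t) * a_flat + t * b_flat
    else:
        mixed = (torch.sin((1.0 - t) * omega) / so) * a_flat \
              + (torch.sin(t * omega) / so) * b_flat
    return mixed.reshape(a.shape).to(a.dtype)
```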

πŸ† Evaluation

### Nous

Gemma-7B-slerp's results on the Nous benchmark suite (evaluation performed using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval)).

| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
| --- | ---: | ---: | ---: | ---: | ---: |
| arcee-ai/Gemma-7B-slerp 📄 | 34.14 | 23.86 | 36.55 | 46.22 | 29.94 |

## 🧩 Configuration

Slerp YAML Config:

```yaml
slices:
  - sources:
      - model: google/gemma-7b-it
        layer_range: [0, 28]
      - model: google/gemma-7b
        layer_range: [0, 28]
merge_method: slerp
base_model: google/gemma-7b
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
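
The `t` schedule sets the interpolation factor per tensor: broadly, t=0 keeps the base model's weights (google/gemma-7b) and t=1 the other model's (google/gemma-7b-it), so the `self_attn` and `mlp` ramps blend the two models differently across layer depth, while all remaining tensors use the default t of 0.5. The merge can be reproduced by saving the config above as `config.yaml` and running mergekit's `mergekit-yaml` entry point, e.g. `mergekit-yaml config.yaml ./gemma-7b-slerp`.

## 💻 Usage

A minimal inference sketch with 🤗 Transformers, assuming the merged weights are hosted under the `arcee-ai/Gemma-7B-slerp` repo id shown in the evaluation table:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "arcee-ai/Gemma-7B-slerp"  # assumed repo id (from the table above)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype
    device_map="auto",
)

prompt = "Explain spherical linear interpolation in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```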