This is a test merge of several Gemma-7b fine-tunes using the task_arithmetic merge method. It has been tested and confirmed to load and generate output correctly.
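For context, task_arithmetic merges models by computing each fine-tune's "task vector" (its weight delta from the base model) and adding a weighted sum of those deltas back onto the base. Below is a minimal per-tensor sketch of that idea in plain PyTorch; it is an illustration of the technique, not mergekit's actual implementation, and the function name and arguments are made up for this example.

```python
import torch

def task_arithmetic(base: torch.Tensor,
                    tuned: list[torch.Tensor],
                    weights: list[float],
                    normalize: bool = True) -> torch.Tensor:
    """Merge one tensor: base + weighted sum of (tuned_i - base) deltas."""
    if normalize:
        # With normalize: true (as in the config below), the weights are
        # rescaled to sum to 1 before the deltas are combined.
        total = sum(weights)
        weights = [w / total for w in weights]
    merged = base.clone()
    for w, t in zip(weights, tuned):
        merged += w * (t - base)  # add this fine-tune's weighted task vector
    return merged
```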

Merge config:

```yaml
models:
  - model: gemma-7b-it-fp16
    parameters:
      weight: 1
  - model: CorticalStack_gemma-7b-ultrachat-sft
    parameters:
      weight: 1
  - model: cloudyu_google-gemma-7b-it-dpo-v1
    parameters:
      weight: 1
  - model: abideen_gemma-7b-openhermes
    parameters:
      weight: 1
merge_method: task_arithmetic
base_model: gemma-7b-base
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
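Assuming mergekit is installed (`pip install mergekit`), a config like the one above can be run with the `mergekit-yaml` CLI (`mergekit-yaml config.yml ./output`) or from Python. The sketch below follows mergekit's documented Python entry point; the config path and output directory are placeholders.

```python
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Path to a file containing the YAML config shown above (name is illustrative)
with open("merge-config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Gemma-Merge-Test-7b",  # output directory (illustrative)
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,             # copy the base model's tokenizer
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```

The merged weights land in the output directory in safetensors format and can be loaded like any other `transformers` checkpoint.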