
Gemma Advanced V2.1

This is a merge of the 'smartest' advanced fine-tunes available for Gemma-2-9b-it. It includes WPO, SimPO, and SPPO. The merge was performed via the SOTA 'della' merge method. Merge parameters have been hand-tuned for best results. The Q8_0 quant is highly recommended until better quants come along.
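
For anyone using the Q8_0 GGUF outside of ollama, here is a minimal sketch of loading it with the llama-cpp-python bindings; the file path, prompt, and generation settings are placeholder assumptions, not part of this release:

# Minimal sketch: load the recommended Q8_0 GGUF with llama-cpp-python (pip install llama-cpp-python).
# The model path and sampling settings are placeholders; adjust them for your setup.
from llama_cpp import Llama

llm = Llama(
    model_path="/path/to/file/gemma-2-9B-it-advanced-v2.1-Q8_0.gguf",
    n_ctx=8192,       # matches the num_ctx in the sample ollama Modelfile below
    n_gpu_layers=-1,  # offload all layers to GPU if available; set to 0 for CPU-only
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a short scene between two rival chefs."}],
    max_tokens=256,
    temperature=1.0,  # V2.1 no longer needs a reduced temperature
)
print(result["choices"][0]["message"]["content"])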

Notes and observations:

  • The extreme temperature sensitivity from V1 has been fixed; the model no longer needs to be run at lower temperatures
  • Has a somewhat different writing style than any of the parent models
  • Great instruction following
  • Tracks plot details well and has good situational understanding
  • Seems to have a good understanding of psychology, emotions and creative writing
  • More 'sane' than base gemma-2-9b-it, SPPO, or SimPO; not as prone as SPPO or SimPO to slipping into 'Cruella De Vil' or 'Evil Sorceress' mode when portraying characters
  • Would likely serve as a good base for further merges
  • I'm looking for a job, if you're hiring. I'm a skilled Python developer who brings strong devops skills along with an ever-growing knowledge of machine learning pipelines and models. Message me if you want to talk about what I can bring to your team.
  • Overall, this feels like a very useful and successful merge.

Quantized GGUFs can be found here:

Thanks to everyone who was kind enough to provide quants!

I'll link to other quants as they appear.

Sample ollama Modelfile

FROM /path/to/file/gemma-2-9B-it-advanced-v2.1-Q8_0.gguf
PARAMETER stop "<start_of_turn>"
PARAMETER stop "<end_of_turn>"
PARAMETER num_ctx 8192
TEMPLATE """<start_of_turn>user
{{ if .System }}{{ .System }} {{ end }}{{ .Prompt }}<end_of_turn>
<start_of_turn>model
{{ .Response }}<end_of_turn>"""
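
To use the Modelfile above, save it as Modelfile (with the FROM line pointing at your local copy of the GGUF), then register and run it; the model name passed to ollama create is arbitrary:

ollama create gemma-2-9b-it-advanced-v2.1 -f Modelfile
ollama run gemma-2-9b-it-advanced-v2.1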

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the della merge method, with google/gemma-2-9b-it as the base.

Models Merged

The following models were included in the merge:

  • wzhouad/gemma-2-9b-it-WPO-HB
  • princeton-nlp/gemma-2-9b-it-SimPO
  • UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: google/gemma-2-9b-it 
  - model: wzhouad/gemma-2-9b-it-WPO-HB
    parameters:
      density: 0.55
      weight: 0.6
  - model: princeton-nlp/gemma-2-9b-it-SimPO 
    parameters:
      density: 0.35
      weight: 0.6
  - model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3
    parameters:
      density: 0.25
      weight: 0.4
merge_method: della
base_model: google/gemma-2-9b-it
parameters:
  normalize: true
  int8_mask: true
  lambda: 1.0
  epsilon: 0.1
dtype: float16
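
To reproduce the merge, save the configuration above (e.g. as config.yaml) and pass it to mergekit's command-line entry point. This is a sketch: the output directory name is arbitrary, and available flags may vary by mergekit version.

mergekit-yaml config.yaml ./gemma-2-9B-it-advanced-v2.1 --cuda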