---
base_model:
  - sam-paech/Darkest-muse-v1
  - lemon07r/Gemma-2-Ataraxy-v4c-9B
library_name: transformers
tags:
  - mergekit
  - merge
license: gemma
language:
  - en
---

# Gemma-2-Ataraxy-v4d-9B

For all intents and purposes, this can be considered the best all-rounder of the Ataraxy series. While primarily made with creative writing in mind, it has also done well in testing, and it was built on much of what I've learned through trial, experimentation, testing, and feedback from others.

Is this the best Ataraxy model? Not sure. I made a lot of variations, and quite honestly most of them aren't great, or at least not as good as the very first version. The v2 series could do well in writing tests, but was a little too over the top and sloppy. The v3 series was a return to roots: a lot closer to v1, essentially v1 but slightly better or different, and the point where improvements start to show in some areas. v4 brings further improvements, especially in overall or general use, even though my primary goal was writing ability. People seem to really like the very first version of Ataraxy, even if it doesn't do as well on various benchmarks. I hope this one comes close to beating its predecessor, but if it doesn't, I will keep trying.

All the Ataraxy models are made primarily for writing ability, but past a certain point it became hard to tell them apart, and even to test for writing performance, because they were all pretty good. Hopefully, with some feedback, we can continue to find improvements.

## Quants

Provided by @mradermacher

GGUF Static: https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4d-9B-GGUF

GGUF IMatrix: https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4d-9B-i1-GGUF

## Leaderboards

Open LLM Leaderboard 2 (12B and under)


## Merge Details

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

### Merge Method

This model was merged using the SLERP merge method.
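SLERP interpolates along the arc between two weight vectors rather than along the straight line between them, which preserves the magnitude of the blended weights better than plain averaging. The following is a minimal, illustrative NumPy sketch of the idea (not mergekit's actual implementation, which operates tensor by tensor with additional handling); `t: 0.25` matches the configuration below, keeping the result closer to the base model:

```python
import numpy as np

def slerp(a: np.ndarray, b: np.ndarray, t: float, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight vectors."""
    a_dir = a / (np.linalg.norm(a) + eps)
    b_dir = b / (np.linalg.norm(b) + eps)
    # Angle between the two weight directions
    dot = np.clip(np.dot(a_dir, b_dir), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < eps:
        # Nearly parallel vectors: fall back to ordinary linear interpolation
        return (1.0 - t) * a + t * b
    sin_theta = np.sin(theta)
    # Weights follow the arc, so the norm of the blend is preserved for unit inputs
    return (np.sin((1.0 - t) * theta) / sin_theta) * a + (np.sin(t * theta) / sin_theta) * b

# With t = 0.25 (as in the config), the result sits 75% of the way toward `a`
a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
print(slerp(a, b, 0.25))
```

For these orthogonal unit vectors, the output is still a unit vector, whereas a plain weighted average would have shrunk it.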

### Models Merged

The following models were included in the merge:

* lemon07r/Gemma-2-Ataraxy-v4c-9B (base)
* sam-paech/Darkest-muse-v1

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: lemon07r/Gemma-2-Ataraxy-v4c-9B
dtype: bfloat16
merge_method: slerp
parameters:
  t: 0.25
slices:
- sources:
  - layer_range: [0, 42]
    model: lemon07r/Gemma-2-Ataraxy-v4c-9B
  - layer_range: [0, 42]
    model: sam-paech/Darkest-muse-v1
```