---
library_name: transformers
tags:
- mergekit
- merge
base_model:
- nbeerbower/gemma2-gutenberg-9B
- princeton-nlp/gemma-2-9b-it-SimPO
- jsgreenawalt/gemma-2-9B-it-advanced-v2.1
- UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3
- recoilme/Gemma-2-Ataraxy-Gemmasutra-9B-slerp
- unsloth/gemma-2-9b-it
- lemon07r/Gemma-2-Ataraxy-v2-9B
- ifable/gemma-2-Ifable-9B
- grimjim/Gemma2-Nephilim-v3-9B
- lemon07r/Gemma-2-Ataraxy-v2a-9B
- wzhouad/gemma-2-9b-it-WPO-HB
- lemon07r/Gemma-2-Ataraxy-9B
model-index:
- name: Gemma-2-Ataraxy-Remix-9B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 70.83
      name: strict accuracy
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-Remix-9B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 41.59
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-Remix-9B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 1.28
      name: exact match
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-Remix-9B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 11.86
      name: acc_norm
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-Remix-9B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 13.72
      name: acc_norm
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-Remix-9B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 35.99
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-Remix-9B
      name: Open LLM Leaderboard
---
# Gemma-2-Ataraxy-Remix-9B

Another test model. Ignore this for now. It probably won't be good, but I am testing a lot of things.
## GGUF to Try

https://huggingface.co/lemon07r/Gemma-2-Ataraxy-Remix-9B-Q8_0-GGUF
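One way to try the Q8_0 quant locally is through llama-cpp-python. This is a minimal sketch, assuming the repo contains a single file matching `*q8_0.gguf`; check the repo's file list if the name differs.

```python
# Minimal sketch: run the Q8_0 GGUF with llama-cpp-python.
# Assumption: the repo holds one GGUF file matching *q8_0.gguf.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="lemon07r/Gemma-2-Ataraxy-Remix-9B-Q8_0-GGUF",
    filename="*q8_0.gguf",  # glob over the repo's GGUF files
    n_ctx=8192,             # Gemma 2 supports an 8k context window
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a two-sentence story."}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```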
## Merge Details

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with unsloth/gemma-2-9b-it as the base.
### Models Merged

The following models were included in the merge:
- nbeerbower/gemma2-gutenberg-9B
- princeton-nlp/gemma-2-9b-it-SimPO
- jsgreenawalt/gemma-2-9B-it-advanced-v2.1
- UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3
- recoilme/Gemma-2-Ataraxy-Gemmasutra-9B-slerp
- lemon07r/Gemma-2-Ataraxy-v2-9B
- ifable/gemma-2-Ifable-9B
- grimjim/Gemma2-Nephilim-v3-9B
- lemon07r/Gemma-2-Ataraxy-v2a-9B
- wzhouad/gemma-2-9b-it-WPO-HB
- lemon07r/Gemma-2-Ataraxy-9B
### Configuration

The following YAML configuration was used to produce this model:
```yaml
base_model: unsloth/gemma-2-9b-it
dtype: bfloat16
merge_method: model_stock
slices:
- sources:
  - layer_range: [0, 42]
    model: lemon07r/Gemma-2-Ataraxy-9B
  - layer_range: [0, 42]
    model: lemon07r/Gemma-2-Ataraxy-v2-9B
  - layer_range: [0, 42]
    model: lemon07r/Gemma-2-Ataraxy-v2a-9B
  - layer_range: [0, 42]
    model: jsgreenawalt/gemma-2-9B-it-advanced-v2.1
  - layer_range: [0, 42]
    model: ifable/gemma-2-Ifable-9B
  - layer_range: [0, 42]
    model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3
  - layer_range: [0, 42]
    model: princeton-nlp/gemma-2-9b-it-SimPO
  - layer_range: [0, 42]
    model: wzhouad/gemma-2-9b-it-WPO-HB
  - layer_range: [0, 42]
    model: nbeerbower/gemma2-gutenberg-9B
  - layer_range: [0, 42]
    model: grimjim/Gemma2-Nephilim-v3-9B
  - layer_range: [0, 42]
    model: recoilme/Gemma-2-Ataraxy-Gemmasutra-9B-slerp
  - layer_range: [0, 42]
    model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3
  - layer_range: [0, 42]
    model: unsloth/gemma-2-9b-it
```
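The merge can be reproduced by saving this configuration to a file and running mergekit's `mergekit-yaml` command (e.g. `mergekit-yaml config.yaml ./Gemma-2-Ataraxy-Remix-9B`). As a quick smoke test of the merged model, here is a minimal inference sketch with transformers; the prompt and generation settings are illustrative only, not tuned recommendations.

```python
# Minimal sketch: load the merged model for chat-style inference.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="lemon07r/Gemma-2-Ataraxy-Remix-9B",
    torch_dtype=torch.bfloat16,  # matches the dtype used in the merge config
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize what model merging does."}]
result = pipe(messages, max_new_tokens=256)
print(result[0]["generated_text"][-1]["content"])
```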
## Open LLM Leaderboard Evaluation Results

Detailed results can be found on the [Open LLM Leaderboard](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-Remix-9B).
| Metric              | Value |
|---------------------|------:|
| Avg.                | 29.21 |
| IFEval (0-Shot)     | 70.83 |
| BBH (3-Shot)        | 41.59 |
| MATH Lvl 5 (4-Shot) |  1.28 |
| GPQA (0-shot)       | 11.86 |
| MuSR (0-shot)       | 13.72 |
| MMLU-PRO (5-shot)   | 35.99 |