---
base_model:
- lemon07r/Gemma-2-Ataraxy-9B
- wzhouad/gemma-2-9b-it-WPO-HB
- rtzr/ko-gemma-2-9b-it
- ghost613/gemma9_on_korean_summary_events
library_name: transformers
tags:
- mergekit
- merge
---
# Gemma-Ko-Merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the `breadcrumbs_ties` merge method, with [lemon07r/Gemma-2-Ataraxy-9B](https://huggingface.co/lemon07r/Gemma-2-Ataraxy-9B) as the base.
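Roughly speaking, `breadcrumbs_ties` operates on task vectors (each model's weights minus the base weights): it prunes both the largest-magnitude outliers and the smallest entries of each delta, then applies TIES-style sign election before adding the surviving deltas back onto the base. Below is a minimal, illustrative PyTorch sketch of the masking step, assuming `density` is the fraction of delta entries kept and `gamma` the fraction of top outliers dropped; `breadcrumbs_mask` is a hypothetical helper, not mergekit's internal API, and mergekit's actual sparsification may differ in detail.

```python
import torch

def breadcrumbs_mask(delta: torch.Tensor, density: float, gamma: float) -> torch.Tensor:
    """Keep a mid-magnitude band of a task vector (fine-tuned minus base weights).

    Zeroes the top `gamma` fraction of entries by magnitude (outliers) and keeps
    the next `density` fraction; everything smaller is zeroed as well.
    Illustrative only.
    """
    flat = delta.abs().flatten()
    n = flat.numel()
    n_top = int(n * gamma)    # largest-magnitude entries to drop
    n_keep = int(n * density) # band of entries to retain
    order = flat.argsort(descending=True)
    mask = torch.zeros(n, dtype=torch.bool, device=delta.device)
    mask[order[n_top:n_top + n_keep]] = True
    return delta * mask.view_as(delta)
```

In the configuration below, the base slice keeps a denser delta (`density: 0.7`) than the other sources (`density: 0.42`), with a small outlier cutoff (`gamma: 0.03`) throughout.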
### Models Merged
The following models were included in the merge:
* [wzhouad/gemma-2-9b-it-WPO-HB](https://huggingface.co/wzhouad/gemma-2-9b-it-WPO-HB)
* [rtzr/ko-gemma-2-9b-it](https://huggingface.co/rtzr/ko-gemma-2-9b-it) + [ghost613/gemma9_on_korean_summary_events](https://huggingface.co/ghost613/gemma9_on_korean_summary_events)
* [rtzr/ko-gemma-2-9b-it](https://huggingface.co/rtzr/ko-gemma-2-9b-it)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: lemon07r/Gemma-2-Ataraxy-9B
        layer_range: [0, 42]
        parameters:
          weight: 1
          density: 0.7
          gamma: 0.03
      - model: wzhouad/gemma-2-9b-it-WPO-HB
        layer_range: [0, 42]
        parameters:
          weight: 1
          density: 0.42
          gamma: 0.03
      - model: rtzr/ko-gemma-2-9b-it
        layer_range: [0, 42]
        parameters:
          weight: 1
          density: 0.42
          gamma: 0.03
      - model: rtzr/ko-gemma-2-9b-it+ghost613/gemma9_on_korean_summary_events # LoRA adapter applied to the base model
        layer_range: [0, 42]
        parameters:
          weight: 1
          density: 0.42
          gamma: 0.03
merge_method: breadcrumbs_ties
base_model: lemon07r/Gemma-2-Ataraxy-9B
dtype: bfloat16
```
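To reproduce the merge, the YAML above can be saved (e.g. as `config.yaml`) and run through mergekit. The sketch below uses mergekit's Python API, equivalent to the `mergekit-yaml` CLI; the output path and option values are illustrative, not part of this model card.

```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the configuration shown above (saved locally as config.yaml).
with open("config.yaml", encoding="utf-8") as f:
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Download the source models (and the LoRA adapter) and write the merged
# checkpoint to ./gemma-ko-merge.
run_merge(
    config,
    out_path="./gemma-ko-merge",
    options=MergeOptions(
        cuda=False,           # set True to merge on GPU
        copy_tokenizer=True,  # carry the tokenizer into the output folder
    ),
)
```

The merged folder then loads like any other transformers checkpoint:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("./gemma-ko-merge")
model = AutoModelForCausalLM.from_pretrained(
    "./gemma-ko-merge", torch_dtype=torch.bfloat16, device_map="auto"
)
```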