|
---
license: cc-by-nc-4.0
---
|
|
|
## Description |
|
|
|
This repo contains the bf16 weights of Nyxene-v1-11B. It is the same recipe as the [previous version](https://huggingface.co/beberik/Nyxene-11B), but rebuilt from newer models, repeating the experiments I originally ran with the older ones.
|
|
|
## Models used
|
- [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) |
|
- [openaccess-ai-collective/DPOpenHermes-7B](https://huggingface.co/openaccess-ai-collective/DPOpenHermes-7B) |
|
- [fblgit/juanako-7b-UNA](https://huggingface.co/fblgit/juanako-7b-UNA) |
|
- [chargoddard/loyal-piano-m7](https://huggingface.co/chargoddard/loyal-piano-m7) |
|
- [argilla/notus-7b-v1](https://huggingface.co/argilla/notus-7b-v1) |
|
|
|
I also added a new model, since doing the same merge with zephyr and dolphin had already produced a noticeably more creative model.
|
|
|
## Prompt template |
|
|
|
After further testing, this template works best:
|
|
|
```
<|system|>
Below is an instruction that describes a task. Write a response that appropriately completes the request.
<|user|>
{prompt}
<|assistant|>
```
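A minimal, untested sketch of using this template with the standard transformers API; the instruction text is only an example:

```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "beberik/Nyxene-v1-11B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Fill the template shown above with an example instruction.
prompt = (
    "<|system|>\n"
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n"
    "<|user|>\n"
    "Summarize what a model merge is in one paragraph.\n"
    "<|assistant|>\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
# Print only the newly generated tokens, not the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```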
|
|
|
## The secret sauce |
|
|
|
loyal-piano with 1% of notus:
```
slices:
  - sources:
      - model: chargoddard/loyal-piano-m7
        layer_range: [0, 32]  # both sources are 32-layer Mistral-7B finetunes
      - model: argilla/notus-7b-v1
        layer_range: [0, 32]
merge_method: slerp
base_model: argilla/notus-7b-v1
parameters:
  t:
    - filter: lm_head
      value: [0.75]
    - filter: embed_tokens
      value: [0.75]
    - filter: self_attn
      value: [0.75, 0.25]
    - filter: mlp
      value: [0.25, 0.75]
    - filter: layernorm
      value: [0.5, 0.5]
    - filter: modelnorm
      value: [0.75]
    - value: 0.99 # fallback for rest of tensors
dtype: bfloat16
```
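For intuition, here is a rough numpy sketch of what slerp does to a pair of weight tensors (not mergekit's exact code), and of how a two-element filter value becomes a per-layer interpolation gradient:

```
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    # Spherical linear interpolation: t = 0 returns v0 (the base_model
    # tensor), t = 1 returns v1 (the other model's tensor).
    u0 = v0.ravel() / (np.linalg.norm(v0) + eps)
    u1 = v1.ravel() / (np.linalg.norm(v1) + eps)
    omega = np.arccos(np.clip(np.dot(u0, u1), -1.0, 1.0))
    if omega < eps:  # tensors nearly parallel: plain lerp is numerically safer
        return (1 - t) * v0 + t * v1
    return (np.sin((1 - t) * omega) * v0 + np.sin(t * omega) * v1) / np.sin(omega)

# With base_model = notus, the fallback t = 0.99 keeps 99% loyal-piano and
# 1% notus, which is where this config's name comes from. List-valued
# filters interpolate across layer depth, e.g. for self_attn tensors:
t_per_layer = np.linspace(0.75, 0.25, num=32)  # layer 0 -> t=0.75, layer 31 -> t=0.25
```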
|
|
|
dolphin-juanako-11B:
```
slices:
  - sources:
      - model: fblgit/juanako-7b-UNA
        layer_range: [0, 24]
  - sources:
      - model: ehartford/dolphin-2.1-mistral-7b
        layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
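Passthrough does no arithmetic at all: it simply stacks the listed layer slices into one deeper model. A toy illustration of the stacking (hypothetical labels, not mergekit code):

```
# Layers 0..23 of juanako followed by layers 8..31 of dolphin.
juanako_slice = [("juanako", i) for i in range(0, 24)]
dolphin_slice = [("dolphin", i) for i in range(8, 32)]

merged_layers = juanako_slice + dolphin_slice
assert len(merged_layers) == 48  # a 48-layer stack built from two 32-layer 7B models
```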
|
|
|
Starling-NeuralHermes-11B:
```
slices:
  - sources:
      - model: berkeley-nest/Starling-LM-7B-alpha
        layer_range: [0, 24]
  - sources:
      - model: mlabonne/NeuralHermes-2.5-Mistral-7B
        layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
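Both stacks above end up with 48 layers, which is where the "11B" comes from. A back-of-the-envelope estimate, assuming Mistral-7B's published shapes (roughly 7.24B parameters, 32 layers, 32000 x 4096 untied embeddings):

```
total_7b = 7.24e9            # approximate Mistral-7B parameter count
embed = 2 * 32_000 * 4096    # input embeddings + lm_head
per_layer = (total_7b - embed) / 32

merged_total = 48 * per_layer + embed
print(f"{merged_total / 1e9:.1f}B")  # ~10.7B, rounded up to "11B"
```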
|
|
|
Nyxene-11B:
```
slices:
  - sources:
      - model: dolphin-juanako-11B
        layer_range: [0, 48]
      - model: Starling-NeuralHermes-11B
        layer_range: [0, 48]
merge_method: slerp
base_model: dolphin-juanako-11B
parameters:
  t:
    - filter: lm_head
      value: [0.75]
    - filter: embed_tokens
      value: [0.75]
    - filter: self_attn
      value: [0.75, 0.25]
    - filter: mlp
      value: [0.25, 0.75]
    - filter: layernorm
      value: [0.5, 0.5]
    - filter: modelnorm
      value: [0.75]
    - value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```
|
All of the merges described here were done with [mergekit](https://github.com/cg123/mergekit).
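With a recent mergekit checkout, each config above can be run with something like `mergekit-yaml config.yml ./output-model`, though the exact entry point may differ depending on the mergekit version.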
|
|