---
library_name: transformers
tags:
- mergekit
- merge
---
# Tips

- SillyTavern presets are in the `presets` folder.
- The model has formatting issues when using asterisks. Novel-like formatting (quotes only) is recommended.
- The system prompt can be improved; help is welcome.
- The model seems to take characters too seriously. If you find it too stubborn, regenerate or edit the reply; it should comply afterwards.
# Deris-SSS

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details

### Merge Method

This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
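For intuition, spherical linear interpolation blends two weight tensors along the arc between them rather than along a straight line, which preserves their magnitudes better than a plain weighted average. Below is a minimal sketch of the idea in Python; it is not mergekit's actual implementation, which handles more edge cases:

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors.

    Simplified sketch: mergekit's real code is more robust, but the core
    formula is the same. t=0 returns a, t=1 returns b.
    """
    a_flat = a.flatten().float()
    b_flat = b.flatten().float()
    a_dir = a_flat / (a_flat.norm() + eps)
    b_dir = b_flat / (b_flat.norm() + eps)
    # Angle between the two weight vectors
    omega = torch.acos((a_dir * b_dir).sum().clamp(-1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:
        # Nearly colinear tensors: fall back to linear interpolation
        return (1.0 - t) * a + t * b
    out = (torch.sin((1.0 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape).to(a.dtype)
```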
### Models Merged

The following models were included in the merge:

- Deris-v2
- SmartyPants-step2
### Configuration

The following YAML configurations were used to produce this model and its intermediate merges:

#### Deris-SSS

Final merge: combines the smart models with the unhinged ones.
```yaml
slices:
  - sources:
      - model: ./Mergekit/Deris-v2
        layer_range: [0, 32]
      - model: ./Mergekit/SmartyPants-step2
        layer_range: [0, 32]
merge_method: slerp
base_model: ./Mergekit/Deris-v2
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: float16
```
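The `t` lists above are gradients: mergekit spreads the anchor values across the layer range, so with Deris-v2 as the base (t=0) the self-attention tensors lean more toward SmartyPants-step2 in later layers, the MLP tensors do the opposite, and every other tensor uses a flat t of 0.5. The sketch below is my assumption of how such a gradient expands to per-layer values, not mergekit's actual code:

```python
def expand_gradient(anchors: list[float], num_layers: int) -> list[float]:
    """Linearly interpolate anchor values across layer indices.

    Assumption: approximates how mergekit turns a `value:` list into a
    per-layer interpolation factor t (0 = base model, 1 = other model).
    """
    if num_layers == 1:
        return [anchors[0]]
    out = []
    for layer in range(num_layers):
        # Position of this layer within the anchor list, in [0, len(anchors) - 1]
        pos = layer / (num_layers - 1) * (len(anchors) - 1)
        lo = int(pos)
        hi = min(lo + 1, len(anchors) - 1)
        frac = pos - lo
        out.append(anchors[lo] * (1 - frac) + anchors[hi] * frac)
    return out

# Per-layer t for the self_attn tensors in the Deris-SSS merge above
print(expand_gradient([0, 0.5, 0.3, 0.7, 1], 32))
```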
#### Deris-v2

Combines a bit of Datura_7B into Eris_Floramix_DPO_7B. Reason: Datura is extremely unhinged :), even more so than Eris.
```yaml
slices:
  - sources:
      - model: ChaoticNeutrals/Eris_Floramix_DPO_7B
        layer_range: [0, 32]
      - model: ResplendentAI/Datura_7B
        layer_range: [0, 32]
merge_method: slerp
base_model: ChaoticNeutrals/Eris_Floramix_DPO_7B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.20, 0.15, 0.25, 0.35]
    - filter: mlp
      value: [0.35, 0.20, 0.25, 0.15, 0]
    - value: 0.20
dtype: float16
```
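Since the later configs reference local paths under `./Mergekit/`, the intermediate merges have to be produced first. Here is a hedged sketch of reproducing one step with mergekit's Python API (assuming the `MergeConfiguration`/`run_merge` interface from mergekit's examples; `deris-v2.yml` is a hypothetical filename holding the config above):

```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# deris-v2.yml is a hypothetical filename containing the YAML config above
with open("deris-v2.yml") as f:
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Write the merged weights where the later configs expect to find them
run_merge(
    config,
    "./Mergekit/Deris-v2",
    options=MergeOptions(cuda=True, copy_tokenizer=True),
)
```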
#### SmartyPants-step1

Combines OMJ into Einstein. Reason: Einstein looks interesting and OMJ was a high-ranking model.
```yaml
slices:
  - sources:
      - model: Weyaxi/Einstein-v4-7B
        layer_range: [0, 32]
      - model: eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO
        layer_range: [0, 32]
merge_method: slerp
base_model: Weyaxi/Einstein-v4-7B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.45, 0.3, 0.55, 0.65]
    - filter: mlp
      value: [0.65, 0.45, 0.55, 0.3, 0]
    - value: 0.45
dtype: float16
```
#### SmartyPants-step2

Combines SmartyPants-step1 into FuseChat-VaRM. Reason: IDK, I just like FuseChat-VaRM.
```yaml
slices:
  - sources:
      - model: FuseAI/FuseChat-7B-VaRM
        layer_range: [0, 32]
      - model: ./Mergekit/SmartyPants-step1
        layer_range: [0, 32]
merge_method: slerp
base_model: FuseAI/FuseChat-7B-VaRM
parameters:
  t:
    - filter: self_attn
      value: [0, 0.45, 0.3, 0.55, 0.65]
    - filter: mlp
      value: [0.65, 0.45, 0.55, 0.3, 0]
    - value: 0.45
dtype: float16
```
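To run the final merge with the transformers library, a minimal sketch follows; the model path below is a placeholder, since this card does not state where the merged weights are published:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder path; substitute the actual repo id or local merge output
model_id = "./Deris-SSS"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Novel-like formatting (quotes only) works best, per the tips above
prompt = "Write a short scene in novel-style prose."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```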