---
base_model:
- chihoonlee10/T3Q-Mistral-Orca-Math-DPO
- senseable/WestLake-7B-v2
- S-miguel/The-Trinity-Coder-7B
- yam-peleg/Experiment26-7B
- InferenceIllusionist/Excalibur-7b-DPO
- Kukedlc/Jupiter-k-7B-slerp
library_name: transformers
tags:
- mergekit
- merge
---

# Jett-w26
This is the **Q8_0 GGUF quant** of giannisan/Jett-w26, produced with faispy's notebook. It should be uncensored; careful prompting may improve results.
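Below is a minimal sketch of loading the quant with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). The filename and generation settings are assumptions; check this repo's files for the actual `.gguf` name.

```python
from llama_cpp import Llama

llm = Llama(
    model_path="Jett-w26.Q8_0.gguf",  # hypothetical filename; use the file from this repo
    n_ctx=4096,                       # context window
    n_gpu_layers=-1,                  # offload all layers to GPU if available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a haiku about merging models."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```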
This is a merge of pre-trained language models created using mergekit.
## Merge Details

### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with yam-peleg/Experiment26-7B as the base.
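In rough terms, DARE sparsifies each fine-tuned model's task vector by randomly dropping elements and rescaling the survivors, while TIES resolves conflicts between models by electing a per-element sign. The sketch below illustrates the idea on flat tensors; it is an illustration only, not mergekit's actual implementation, and the `dare` and `dare_ties` helper names are hypothetical.

```python
import torch

def dare(delta: torch.Tensor, density: float) -> torch.Tensor:
    # DARE: keep each element with probability `density`, zero the rest,
    # and rescale survivors by 1/density so the expected delta is unchanged.
    mask = torch.bernoulli(torch.full_like(delta, density))
    return delta * mask / density

def dare_ties(base, finetuned, densities, weights):
    # Sparsify each task vector (finetuned - base) with DARE and apply its weight.
    deltas = torch.stack([
        dare(ft - base, d) * w
        for ft, d, w in zip(finetuned, densities, weights)
    ])
    # TIES: elect a per-element sign by weighted majority, then add only
    # the sign-agreeing deltas back onto the base weights.
    elected = deltas.sum(dim=0).sign()
    agree = (deltas.sign() == elected).to(deltas.dtype)
    return base + (deltas * agree).sum(dim=0)
```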
### Models Merged
The following models were included in the merge:
- chihoonlee10/T3Q-Mistral-Orca-Math-DPO
- senseable/WestLake-7B-v2
- S-miguel/The-Trinity-Coder-7B
- InferenceIllusionist/Excalibur-7b-DPO
- Kukedlc/Jupiter-k-7B-slerp
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: yam-peleg/Experiment26-7B
    # No parameters necessary for base model
  - model: Kukedlc/Jupiter-k-7B-slerp
    parameters:
      density: 0.58
      weight: 0.25
  - model: S-miguel/The-Trinity-Coder-7B
    parameters:
      density: 0.6
      weight: 0.20
  - model: chihoonlee10/T3Q-Mistral-Orca-Math-DPO
    parameters:
      density: 0.6
      weight: 0.20
  - model: senseable/WestLake-7B-v2
    parameters:
      density: 0.56
      weight: 0.20
  - model: InferenceIllusionist/Excalibur-7b-DPO
    parameters:
      density: 0.58
      weight: 0.15
merge_method: dare_ties
base_model: yam-peleg/Experiment26-7B
dtype: bfloat16
```
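To reproduce the merge, this configuration can be saved as `config.yml` and passed to mergekit's CLI, e.g. `mergekit-yaml config.yml ./Jett-w26` (the output path here is illustrative).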