technicolor consists of the following SLERP merge, which was then merged with the LoRA-augmented models below to produce rainbow:
```yaml
slices:
  - sources:
      - model: paulml/OGNO-7B
        layer_range: [0, 32]
      - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: SanjiWatsuki/Kunoichi-DPO-v2-7B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
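For intuition, SLERP interpolates each pair of weight tensors along the arc between them (treating the tensors as high-dimensional vectors) rather than along a straight line, with the per-filter `t` lists above controlling how the interpolation factor varies across the layer stack. Below is a minimal sketch of the formula, not mergekit's actual implementation:

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors.

    t = 0 returns v0, t = 1 returns v1; intermediate values follow the
    arc between the two tensors rather than the straight line.
    """
    a, b = v0.flatten().float(), v1.flatten().float()
    # Angle between the two tensors, treated as high-dimensional vectors.
    cos_omega = torch.dot(a, b) / (a.norm() * b.norm() + eps)
    omega = torch.acos(cos_omega.clamp(-1.0, 1.0))
    if omega.abs() < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation.
        return torch.lerp(v0, v1, t)
    sin_omega = torch.sin(omega)
    out = (torch.sin((1 - t) * omega) / sin_omega) * a + (torch.sin(t * omega) / sin_omega) * b
    return out.reshape(v0.shape).to(v0.dtype)
```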
# rainbow
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, with technicolor as the base.
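In task arithmetic, each fine-tuned model contributes a "task vector" (its parameter delta from the base), and the merged model is the base plus a weighted sum of those vectors. The sketch below illustrates the idea over plain state dicts; it is not mergekit's code, and the handling of `normalize: true` (dividing by the total weight) is an assumption:

```python
import torch

def task_arithmetic_merge(base: dict, finetunes: list[dict], weights: list[float],
                          normalize: bool = True) -> dict:
    """Illustrative task-arithmetic merge over state dicts (not mergekit's implementation)."""
    total = sum(weights)
    merged = {}
    for name, base_tensor in base.items():
        # Weighted sum of task vectors (deltas from the base) for this parameter.
        delta = torch.zeros_like(base_tensor, dtype=torch.float32)
        for sd, w in zip(finetunes, weights):
            delta += w * (sd[name].float() - base_tensor.float())
        if normalize and total != 0:
            delta /= total  # assumed meaning of normalize: true
        merged[name] = (base_tensor.float() + delta).to(base_tensor.dtype)
    return merged
```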
### Models Merged
The following models were included in the merge (the `model+adapter` notation denotes a LoRA adapter applied to technicolor before merging):
- technicolor + jeiku/Theory_of_Mind_Mistral
- technicolor + jeiku/Gnosis_Reformatted_Mistral
- technicolor + Undi95/Mistral-7B-small_pippa_limaRP-v3-lora
- technicolor + jeiku/Theory_of_Mind_Roleplay_Mistral
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: task_arithmetic
base_model: technicolor
parameters:
  normalize: true
models:
  - model: technicolor+jeiku/Theory_of_Mind_Roleplay_Mistral
    parameters:
      weight: 1
  - model: technicolor+jeiku/Theory_of_Mind_Mistral
    parameters:
      weight: 1
  - model: technicolor+jeiku/Gnosis_Reformatted_Mistral
    parameters:
      weight: 1
  - model: technicolor+Undi95/Mistral-7B-small_pippa_limaRP-v3-lora
    parameters:
      weight: 1
dtype: float16
```
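The merged result is a standard Mistral-architecture checkpoint and can be loaded like any other causal LM with transformers. A minimal sketch, assuming the published repo id `jeiku/Rainbow_69_7B` from this model card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jeiku/Rainbow_69_7B"  # repo id as listed on this model card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Tell me about rainbows."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```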