[HIGHLY EXPERIMENTAL]
Just try it for a good laugh. Needs testing.
The plan:
Open-Orca/OpenOrcaxOpenChat-Preview2-13B
PygmalionAI/pygmalion-2-13b
Undi95/MLewd-L2-13B-v2-3
jondurbin/spicyboros-13b-2.2
lemonilia/limarp-llama2-v2
Step 1: Merge OpenOrcaxOpenChat-Preview2-13B with pygmalion-2-13b
=> OpenOrcaPyg2
Step 2: Merge MLewd with Spicyboros
=> MLewdBorosPlus
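For the record, merges like steps 1 and 2 boil down to SLERPing the two checkpoints tensor by tensor. Below is a minimal sketch (not the actual script used), assuming both models share the LLaMA-2-13B architecture so every key and shape lines up; `slerp_merge` and the state-dict variable names are illustrative:

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical interpolation between two weight tensors, treated as flat vectors."""
    a_f, b_f = a.flatten().float(), b.flatten().float()
    a_n = a_f / (a_f.norm() + eps)
    b_n = b_f / (b_f.norm() + eps)
    dot = torch.clamp(torch.dot(a_n, b_n), -1.0, 1.0)
    theta = torch.acos(dot)
    if theta < eps:  # near-parallel tensors: plain lerp is good enough
        return ((1 - t) * a_f + t * b_f).reshape(a.shape).to(a.dtype)
    sin_theta = torch.sin(theta)
    out = (torch.sin((1 - t) * theta) / sin_theta) * a_f \
        + (torch.sin(t * theta) / sin_theta) * b_f
    return out.reshape(a.shape).to(a.dtype)

def slerp_merge(sd_a: dict, sd_b: dict, t: float = 0.5) -> dict:
    """Merge two state dicts key by key; both must have identical keys and shapes."""
    return {k: slerp(t, sd_a[k], sd_b[k]) for k in sd_a}

# Step 1: slerp_merge(orca_sd, pyg2_sd)        => OpenOrcaPyg2
# Step 2: slerp_merge(mlewd_sd, spicyboros_sd) => MLewdBorosPlus
```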
Step 3: On the layer side, replace layers 0 to 8 with MLewd and layers 16 to 20 with Spicyboros in the first merge (see the layer-swap sketch below)
=> OpenOrcaPyg2-Layered
Step 4: On the layer side, replace layers 0 to 8 with MLewd and layers 16 to 20 with Spicyboros in the second merge
=> MLewdBorosPlus-Layered
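Steps 3 and 4 are the same operation applied to each merge: overwrite a contiguous range of transformer blocks with the donor model's weights. A rough sketch, assuming LLaMA-style parameter names (`model.layers.{i}.`) and reading the 0-8 / 16-20 ranges as inclusive; `replace_layers` is a made-up helper, not mergekit syntax:

```python
def replace_layers(base_sd: dict, donor_sd: dict, layer_ids: range) -> dict:
    """Return a copy of base_sd with the given transformer layers taken from donor_sd."""
    out = dict(base_sd)
    prefixes = tuple(f"model.layers.{i}." for i in layer_ids)
    for key, tensor in donor_sd.items():
        if key.startswith(prefixes):
            out[key] = tensor
    return out

# Step 3: patch the first merge (ranges inclusive, hence the +1)
openorcapyg2_layered = replace_layers(openorcapyg2, mlewd, range(0, 9))
openorcapyg2_layered = replace_layers(openorcapyg2_layered, spicyboros, range(16, 21))
# Step 4 does the same to the second merge, giving MLewdBorosPlus-Layered.
```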
Step 5: Merge OpenOrcaPyg2-Layered with MLewdBorosPlus-Layered
=> OpenRPBase
Step 6: Apply the limarp-llama2-v2 LoRA at 0.5 weight at the end
=> OpenRP
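Folding a LoRA in at 0.5 weight just means scaling its low-rank delta before adding it to the base weights. A hedged sketch: the `lora_A`/`lora_B` pairing follows the usual PEFT convention, but the exact key names and the `scaling` factor (alpha / rank) depend on how the adapter was saved, so treat `apply_lora` as illustrative:

```python
def apply_lora(base_sd: dict, lora_sd: dict, weight: float = 0.5, scaling: float = 1.0) -> dict:
    """Add weight * scaling * (B @ A) onto each base matrix the adapter targets."""
    out = dict(base_sd)
    for key, a in lora_sd.items():
        if "lora_A" not in key:
            continue
        b = lora_sd[key.replace("lora_A", "lora_B")]
        # Illustrative mapping from the adapter key to the base weight key.
        target = key.split(".lora_A")[0] + ".weight"
        delta = weight * scaling * (b.float() @ a.float())
        out[target] = (out[target].float() + delta).to(out[target].dtype)
    return out

# Step 6: OpenRPBase + limarp-llama2-v2 at half strength => OpenRP
openrp = apply_lora(openrpbase, limarp, weight=0.5)
```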
Goal: turn Orca into an RP model with the Pyg2 dataset, keeping the MLewd+Spicyboros layers at 100% across the merge, and avoid censoring.
They get diluted to ~25% in the other layers; SLERP does the dirty work.
The LoRA is there to steer the model toward RP writing.
Don't ask me why this model works. I'm a blind scientist. It seems a little obsessed with the game "Garry's Mod" though. Be patient with him.
SuperCOT applied: https://huggingface.co/Undi95/OpenRP-13B-SuperCOT-GGUF