---
base_model:
- Riiid/sheep-duck-llama-2-13b
- IkariDev/Athena-v4
- TheBloke/Llama-2-13B-fp16
- KoboldAI/LLaMA2-13B-Psyfighter2
- KoboldAI/LLaMA2-13B-Erebus-v3
- Henk717/echidna-tiefigther-25
- Undi95/Unholy-v2-13B
- ddh0/EstopianOrcaMaid-13b
tags:
- mergekit
- merge
- not-for-all-audiences
- ERP
- RP
- Roleplay
- uncensored
license: llama2
language:
- en
---
# Model

This is the BF16 unquantized version of SnowyRP, and the first public release of a model in the SnowyRP series!

[BF16](https://huggingface.co/Masterjp123/SnowyRP-FinalV1-L2-13B) | [GPTQ](https://huggingface.co/Masterjp123/SnowyRP-FinalV1-L2-13B-GPTQ) | [GGUF](https://huggingface.co/Masterjp123/SnowyRP-FinalV1-L2-13B-GGUF)

Any future quantizations I am made aware of will be added here.

## Merge Details

This merge combines highly ranked models to get a better result. To keep model incest from becoming a big problem, I made sure to merge models that are relatively pure.

These models CAN and WILL produce X-rated or harmful content; they are heavily uncensored in an attempt to avoid limiting or degrading the model.

This model has a very good knowledge base and a decent understanding of anatomy. It is also very versatile, working well for general assistant tasks, RP and ERP, RPG-style RPs, and much more.

## Model Use

Here is a modified Alpaca prompt I use with this model:

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
Take the role of {{char}} in a play that leaves a lasting impression on {{user}}. Write {{char}}'s next reply. Never skip or gloss over {{char}}'s actions. Progress the scene at a naturally slow pace. Avoid repetition, don't loop. Develop the plot slowly, always stay in character. Describe all actions in full, elaborate, explicit, graphic, and vivid detail. Mention all relevant sensory perceptions.
```

I also use an Author's Note to get the best behavior out of the model:

```
[IMPORTANT NOTES: things must be kept in 3rd person, finally make ALL replies as lengthy and detailed as possible by using these strategies: describe sights, sounds, sensations, and the thoughts and feelings of the characters in detail, as well as replying with at least 2 or more paragraphs.]
```

This model is very good... WITH THE RIGHT SETTINGS. I personally use Mirostat mixed with dynamic temperature, plus epsilon cutoff and eta cutoff. SillyTavern settings files are included in this repo.

```
Optimal settings (so far)

Mirostat
  mode: 2
  tau: 2.95
  eta: 0.05

Dynamic Temp
  min: 0.25
  max: 1.8

Cutoffs
  epsilon: 3
  eta: 3
```

### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [TheBloke/Llama-2-13B-fp16](https://huggingface.co/TheBloke/Llama-2-13B-fp16) as the base model.
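If you want to reproduce a merge like this, the YAML configurations under [Configuration](#configuration) below can be fed to [mergekit](https://github.com/cg123/mergekit) (the two intermediate task-arithmetic merges first, then the final TIES merge). Here is a minimal sketch using mergekit's documented Python API; the config filename and output directory are placeholders, and `mergekit-yaml <config>.yml <out_dir>` is the CLI equivalent:

```python
# Sketch: run one of the mergekit YAML configs from the Configuration section.
# Assumes `pip install mergekit` and enough disk/RAM for 13B-parameter weights.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YML = "./snowyrp-p1.yml"  # hypothetical path: save a config below here
OUT_PATH = "./snowyrpp1"         # where the merged model will be written

with open(CONFIG_YML, "r", encoding="utf-8") as fp:
    config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    config,
    out_path=OUT_PATH,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # a GPU speeds this up but is optional
        copy_tokenizer=True,             # carry a tokenizer into the output dir
    ),
)
```

Run the P1 and P2 configs first, then point the final TIES config at the two intermediate outputs (published here as Masterjp123/snowyrpp1 and Masterjp123/snowyrpp2).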
### Models Merged

The following models were included in the merge:
* [Riiid/sheep-duck-llama-2-13b](https://huggingface.co/Riiid/sheep-duck-llama-2-13b)
* [IkariDev/Athena-v4](https://huggingface.co/IkariDev/Athena-v4)
* [KoboldAI/LLaMA2-13B-Psyfighter2](https://huggingface.co/KoboldAI/LLaMA2-13B-Psyfighter2)
* [KoboldAI/LLaMA2-13B-Erebus-v3](https://huggingface.co/KoboldAI/LLaMA2-13B-Erebus-v3)
* [Henk717/echidna-tiefigther-25](https://huggingface.co/Henk717/echidna-tiefigther-25)
* [Undi95/Unholy-v2-13B](https://huggingface.co/Undi95/Unholy-v2-13B)
* [ddh0/EstopianOrcaMaid-13b](https://huggingface.co/ddh0/EstopianOrcaMaid-13b)

### Configuration

The following YAML configurations were used to produce this model. P1 and P2 are intermediate task-arithmetic merges (referenced in the final merge as Masterjp123/snowyrpp1 and Masterjp123/snowyrpp2), which are then combined with TIES.

For P1:

```yaml
base_model:
  model:
    path: TheBloke/Llama-2-13B-fp16
dtype: bfloat16
merge_method: task_arithmetic
slices:
- sources:
  - layer_range: [0, 40]
    model:
      model:
        path: TheBloke/Llama-2-13B-fp16
  - layer_range: [0, 40]
    model:
      model:
        path: Undi95/Unholy-v2-13B
    parameters:
      weight: 1.0
  - layer_range: [0, 40]
    model:
      model:
        path: Henk717/echidna-tiefigther-25
    parameters:
      weight: 0.45
  - layer_range: [0, 40]
    model:
      model:
        path: KoboldAI/LLaMA2-13B-Erebus-v3
    parameters:
      weight: 0.33
```

For P2:

```yaml
base_model:
  model:
    path: TheBloke/Llama-2-13B-fp16
dtype: bfloat16
merge_method: task_arithmetic
slices:
- sources:
  - layer_range: [0, 40]
    model:
      model:
        path: TheBloke/Llama-2-13B-fp16
  - layer_range: [0, 40]
    model:
      model:
        path: KoboldAI/LLaMA2-13B-Psyfighter2
    parameters:
      weight: 1.0
  - layer_range: [0, 40]
    model:
      model:
        path: Riiid/sheep-duck-llama-2-13b
    parameters:
      weight: 0.45
  - layer_range: [0, 40]
    model:
      model:
        path: IkariDev/Athena-v4
    parameters:
      weight: 0.33
```

For the final merge:

```yaml
base_model:
  model:
    path: TheBloke/Llama-2-13B-fp16
dtype: bfloat16
merge_method: ties
parameters:
  int8_mask: 1.0
  normalize: 1.0
slices:
- sources:
  - layer_range: [0, 40]
    model:
      model:
        path: ddh0/EstopianOrcaMaid-13b
    parameters:
      density: [1.0, 0.7, 0.1]
      weight: 1.0
  - layer_range: [0, 40]
    model:
      model:
        path: Masterjp123/snowyrpp1
    parameters:
      density: 0.5
      weight: [0.0, 0.3, 0.7, 1.0]
  - layer_range: [0, 40]
    model:
      model:
        path: Masterjp123/snowyrpp2
    parameters:
      density: 0.33
      weight:
      - filter: mlp
        value: 0.5
      - value: 0.0
  - layer_range: [0, 40]
    model:
      model:
        path: TheBloke/Llama-2-13B-fp16
```
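For reference, here is a minimal `transformers` sketch for loading the BF16 weights and prompting in the Alpaca format above. The `### Response:` header is the usual Alpaca convention and an assumption here, and Mirostat/dynamic temperature are frontend (SillyTavern) features, so plain sampling parameters stand in below:

```python
# Minimal usage sketch, assuming the BF16 repo above and enough GPU memory
# for a 13B model in bf16 (adjust device_map or quantize to taste).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Masterjp123/SnowyRP-FinalV1-L2-13B"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

# Alpaca-style prompt; substitute the full roleplay instruction from above.
# The "### Response:" header is assumed, not taken from the card.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n"
    "Take the role of {{char}} in a play that leaves a lasting impression "
    "on {{user}}. Write {{char}}'s next reply.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=1.0,  # stand-in; use Mirostat/dynamic temp in your frontend
    top_p=0.95,
)
# Print only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```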