---
license: cc-by-nc-4.0
tags:
- not-for-all-audiences
- nsfw
---

## Description

This repo contains fp16 files of Mistral-RP-0.1-7B.

[Some example outputs](https://files.catbox.moe/mdkebx.png)

Here is the recipe:

```yaml
slices:
  - sources:
      - model: migtissera/Synthia-7B-v1.3
        layer_range: [0, 32]
      - model: Undi95/Mistral-small_pippa_limaRP-v3-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: migtissera/Synthia-7B-v1.3
parameters:
  t:
    - filter: lm_head
      value: [0.75]
    - filter: embed_tokens
      value: [0.75]
    - filter: self_attn
      value: [0.75, 0.25]
    - filter: mlp
      value: [0.25, 0.75]
    - filter: layernorm
      value: [0.5, 0.5]
    - filter: modelnorm
      value: [0.75]
    - value: 0.5 # fallback for rest of tensors
dtype: float16
```

Tool used: https://github.com/cg123/mergekit/tree/yaml

## Models and LoRA used

- [Mistral-7B-small_pippa_limaRP-v3-lora](https://huggingface.co/Undi95/Mistral-7B-small_pippa_limaRP-v3-lora)
- [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
- [Synthia-7B-v1.3](https://huggingface.co/migtissera/Synthia-7B-v1.3)

## Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:

```

## LimaRP v3 usage and suggested settings

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/ZC_iP2KkcEcRdgG_iyxYE.png)

You can follow these instruction format settings in SillyTavern. Replace "tiny" with your desired response length:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/PIn8_HSPTJEMdSEpNVSdm.png)

If you want to support me, you can [here](https://ko-fi.com/undiai).
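
## Usage example

A minimal inference sketch with the `transformers` library, wiring up the Alpaca template above. The repo id and the sample instruction are placeholders, so substitute this repo's actual Hub id and your own prompt:

```python
# Sketch only: "Undi95/Mistral-RP-0.1-7B" is an assumed repo id; replace it
# with this repository's actual id on the Hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Undi95/Mistral-RP-0.1-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # the repo ships fp16 weights
    device_map="auto",
)

# Alpaca prompt format, exactly as in the template section above.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n"
    "Write a short in-character greeting for a tavern keeper.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
# Print only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```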