Mistral-RP-0.1-7B
---
license: cc-by-nc-4.0
tags:
  - not-for-all-audiences
  - nsfw
---

Description

This repo contains fp16 files of Mistral-RP-0.1-7B.

Some examples of output

Here is the recipe:

slices:
  - sources:
      - model: migtissera/Synthia-7B-v1.3
        layer_range: [0, 32]
      - model: Undi95/Mistral-small_pippa_limaRP-v3-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: migtissera/Synthia-7B-v1.3
parameters:
  t:
    - filter: lm_head
      value: [0.75]
    - filter: embed_tokens
      value: [0.75]
    - filter: self_attn
      value: [0.75, 0.25]
    - filter: mlp
      value: [0.25, 0.75]
    - filter: layernorm
      value: [0.5, 0.5]
    - filter: modelnorm
      value: [0.75]
    - value: 0.5 # fallback for rest of tensors
dtype: float16
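The `merge_method: slerp` in the recipe above blends each pair of tensors by spherical linear interpolation rather than a plain weighted average, which preserves the magnitude geometry of the weights. A minimal NumPy sketch of SLERP on flattened weight tensors (the function name and the lerp fallback for near-parallel vectors are illustrative, not mergekit's actual code):

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    # Spherical linear interpolation: walk along the arc between
    # v0 and v1 instead of the straight line between them.
    v0n = v0 / (np.linalg.norm(v0) + eps)
    v1n = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.dot(v0n, v1n), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        return (1 - t) * v0 + t * v1
    s0 = np.sin((1 - t) * theta) / np.sin(theta)
    s1 = np.sin(t * theta) / np.sin(theta)
    return s0 * v0 + s1 * v1
```

At `t=0` this returns the first model's tensor, at `t=1` the second's, matching how the recipe's `t` values weight Synthia vs. the pippa/limaRP merge.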

Tool used: https://github.com/cg123/mergekit/tree/yaml
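In the recipe, two-element `t` values like `[0.75, 0.25]` for `self_attn` define a gradient across the 32 layers: early layers lean toward the second model, later layers toward the first (or vice versa, depending on order). A sketch of how such a gradient can be expanded into a per-layer `t` (a hypothetical helper, not mergekit's internal function):

```python
def layer_t(values, layer_idx, num_layers=32):
    # Expand a gradient like [0.75, 0.25] into a per-layer t value
    # by linear interpolation across the layer stack.
    if len(values) == 1:
        return values[0]
    frac = layer_idx / (num_layers - 1)   # position in [0, 1] across layers
    pos = frac * (len(values) - 1)        # position among the anchor values
    lo = int(pos)
    hi = min(lo + 1, len(values) - 1)
    w = pos - lo
    return values[lo] * (1 - w) + values[hi] * w
```

So for `self_attn`, layer 0 gets `t = 0.75` and layer 31 gets `t = 0.25`, while single-element filters like `lm_head: [0.75]` apply one value everywhere.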

Models and LoRA used

Prompt template: Alpaca

Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
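The Alpaca template above can be filled programmatically before sending text to the model. A small sketch (the constant and function names are illustrative):

```python
# Alpaca prompt template as used by this model card.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n### Response:\n"
)

def build_prompt(instruction: str) -> str:
    # Substitute the user's instruction into the template; generation
    # should continue from directly after the "### Response:" header.
    return ALPACA_TEMPLATE.format(prompt=instruction)
```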

LimaRP v3 usage and suggested settings

[screenshot: suggested LimaRP v3 settings]

You can follow these instruction format settings in SillyTavern. Replace "tiny" with your desired response length:

[screenshot: SillyTavern instruction format settings]

If you want to support me, you can here.