Tags: GGUF · Not-For-All-Audiences · nsfw · Inference Endpoints

Description

This repo contains quantized files of Mistral-RP-0.1-7B.

Some examples of output

Here is the recipe:

```yaml
slices:
  - sources:
      - model: migtissera/Synthia-7B-v1.3
        layer_range: [0, 32]
      - model: Undi95/Mistral-small_pippa_limaRP-v3-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: migtissera/Synthia-7B-v1.3
parameters:
  t:
    - filter: lm_head
      value: [0.75]
    - filter: embed_tokens
      value: [0.75]
    - filter: self_attn
      value: [0.75, 0.25]
    - filter: mlp
      value: [0.25, 0.75]
    - filter: layernorm
      value: [0.5, 0.5]
    - filter: modelnorm
      value: [0.75]
    - value: 0.5 # fallback for rest of tensors
dtype: float16
```
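In this recipe, `t` controls how far each tensor is interpolated from the base model toward the second model. A multi-valued entry such as `[0.25, 0.75]` is, as I understand mergekit's behavior, expanded into a per-layer gradient by linear interpolation, while a single value applies uniformly. A rough sketch of that expansion (the helper name is hypothetical, not mergekit's API):

```python
def gradient_values(anchors, n_layers):
    """Expand a short anchor list (e.g. [0.25, 0.75]) into n_layers
    per-layer t values by piecewise-linear interpolation.
    This mirrors the assumed mergekit gradient behavior; a single
    anchor is broadcast to every layer."""
    if len(anchors) == 1:
        return [anchors[0]] * n_layers
    out = []
    for i in range(n_layers):
        # Position of this layer along the anchor list, in [0, len-1]
        pos = i / (n_layers - 1) * (len(anchors) - 1)
        lo = int(pos)
        hi = min(lo + 1, len(anchors) - 1)
        frac = pos - lo
        out.append(anchors[lo] * (1 - frac) + anchors[hi] * frac)
    return out
```

So `[0.25, 0.75]` over 32 layers ramps smoothly from 0.25 at the first layer to 0.75 at the last, biasing early MLP layers toward Synthia and later ones toward the RP model.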

Tool used: https://github.com/cg123/mergekit/tree/yaml
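For intuition, the `slerp` merge method interpolates each pair of tensors along the great circle between their directions rather than averaging them linearly. A minimal NumPy sketch of the operation (an illustration of the technique, not mergekit's actual implementation):

```python
import numpy as np

def slerp(t, a, b, eps=1e-8):
    """Spherical linear interpolation between two flattened tensors.
    t=0 returns a, t=1 returns b; intermediate t follows the arc
    between their normalized directions."""
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    dot = np.clip(np.dot(a_n, b_n), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation
        return (1 - t) * a + t * b
    s = np.sin(theta)
    return (np.sin((1 - t) * theta) / s) * a + (np.sin(t * theta) / s) * b
```

Compared to a plain weighted average, SLERP preserves more of each tensor's magnitude and direction, which is why it is a popular choice for merging attention and MLP weights.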

Models and LoRA used

Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```

LimaRP v3 usage and suggested settings


You can follow these instruction format settings in SillyTavern. Replace `tiny` with your desired response length.


If you want to support me, you can here.

Format: GGUF · Model size: 7.24B params · Architecture: llama · Quantizations available: 4-bit, 5-bit, 6-bit, 8-bit
