---
license: apache-2.0
pipeline_tag: text-generation
tags:
- safetensors
- mergekit
- merge
- mistral
- not-for-all-audiences
- nsfw
- rp
- roleplay
language:
- en
---
# This model is recommended for RP, but you can use it as an assistant as well.
#### New model! Version 2 brings fewer GPT-isms, but it's more of the same, so I made this one. This is probably the best one. Please give it a try.
---
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/yXSoENvYzk1rN3mSb60ym.jpeg)
Image from [Lewdiculous/EndlessRP-v3-7B-GGUF-Imatrix](https://huggingface.co/Lewdiculous/EndlessRP-v3-7B-GGUF-Imatrix).
### Prompt Format:
- **Extended Alpaca Format**, as used by [lemonilia/LimaRP-Mistral-7B-v0.1](https://huggingface.co/lemonilia/LimaRP-Mistral-7B-v0.1).
Use *### Response: (length = huge)*, for example, to increase response length. You can use **Metharme** or **ChatML** as well, but **Alpaca** is recommended. A sketch of the layout is shown below.
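Roughly, an Extended Alpaca prompt looks like this (layout assumed from the LimaRP card; the persona/scenario text and names are placeholders, and the `(length = ...)` modifier is optional):

```
### Instruction:
Character's Persona: {description of the character}
User's Persona: {description of the user}
Scenario: {what is happening in the chat}
Play the role of Character in a roleplaying chat with User.

### Input:
User: {user message}

### Response: (length = huge)
Character: {model reply}
```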

### Configuration

Source:

```yaml
models:
  - model: mistralai/Mistral-7B-v0.1
  - model: Elizezen/Hameln-japanese-mistral-7B
    # This model brings very good creative output...
    parameters:
      density: 0.6
      weight: 0.25
  - model: fblgit/una-cybertron-7b-v3-OMA+.\toxic-dpo-v0.1-NoWarning-lora
    # Please refer to the model page for more information. Added a fine-tuned Toxic DPO LoRA to remove some boring warnings.
    parameters:
      density: 0.6
      weight: 0.25
  - model: cgato/Thespis-CurtainCall-7b-v0.1.2+Doctor-Shotgun/mistral-v0.1-7b-pippa-metharme-lora
    # A good model compatible with ST. I added a PIPPA + Metharme LoRA to make it more 'balanced'.
    parameters:
      density: 0.6
      weight: 0.25
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  int8_mask: true
dtype: bfloat16
```
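If you want to reproduce the merge, the config above can be fed to [mergekit](https://github.com/arcee-ai/mergekit). Below is a minimal sketch using mergekit's Python API (the CLI equivalent is `mergekit-yaml config.yml ./merged-model --cuda`); exact option names may differ between mergekit versions.

```python
# Rough reproduction sketch; assumes the YAML above is saved as config.yml
# and that mergekit is installed (pip install mergekit).
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yml", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./merged-model",  # output directory for the merged weights
    options=MergeOptions(cuda=True, copy_tokenizer=True),
)
```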
As this model mostly focuses on RP and story writing, please don't expect it to be smart with riddles or logic tests.
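
For quick local testing, here is a minimal Transformers sketch (the repository id and character name are placeholders; substitute this model's actual id):

```python
# Minimal inference sketch with Hugging Face Transformers.
# "your-username/EndlessRP-7B" is a placeholder repo id, not the real one.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/EndlessRP-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Extended Alpaca prompt with the optional length modifier.
prompt = (
    "### Instruction:\n"
    "Play the role of Stella, a cheerful bard, in a roleplaying chat with User.\n\n"
    "### Input:\n"
    "User: Tell me about the town we just arrived in.\n\n"
    "### Response: (length = huge)\n"
    "Stella:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
# Print only the newly generated tokens.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```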