---
language:
- en
- fr
- de
- es
- it
- pt
- zh
- ja
- ru
- ko
license: other
license_name: mrl
license_link: https://mistral.ai/licenses/MRL-0.1.md
---
# Writer-Large-2411-v2.1

EXL2-Quant of [gghfez/Writer-Large-2411-v2.1](https://huggingface.co/gghfez/Writer-Large-2411-v2.1)

Creative-Writing Control-Vectors available here: [gghfez/Writer-Large-2411-v2.1-control-vectors](https://huggingface.co/gghfez/Writer-Large-2411-v2.1-control-vectors)

## Overview

This model is built on Mistral-Large-Instruct-2411 and optimized for creative writing. The base model excels at following instructions and tracking details over long contexts when using the [new prompt template](https://huggingface.co/gghfez/Mistral-Large-Instruct-2411/blob/main/tokenizer_config.json#L6177).

### Key Improvements
- Reduced positivity bias
- Reduced AI tropes and repetitive language patterns in story generation
- Enhanced performance on long-context stories (multiple chapters) and roleplay sessions
- Improved steering capabilities for roleplay via [OOC] instructions
- Better handling of "group chat" scenarios



<img src="https://files.catbox.moe/hisiua.png" width="400"/>

## Usage

### Prompt Template
**The model requires a system prompt in the Mistral-V7 format.**
If you omit `[SYSTEM_PROMPT] [/SYSTEM_PROMPT]`, the model:
- May not follow instructions properly at short contexts
- Can become repetitive at longer contexts

Example:
```python
[SYSTEM_PROMPT]You are an award winning writer. Assist the user.[/SYSTEM_PROMPT][INST] Write the opening chapter of ... [/INST]
```
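If you are driving the model programmatically (for example through tabbyAPI's OpenAI-compatible completions endpoint), the same template can be assembled by hand. The sketch below is illustrative only: the endpoint URL, API key, user prompt, and sampling settings are placeholders, not part of this card.

```python
# Illustrative sketch: build a Mistral-V7 prompt string and send it as a raw
# completion request to a local OpenAI-compatible server (e.g. tabbyAPI).
# The URL, API key, prompt text, and sampling values below are placeholders.
import requests

system = "You are an award winning writer. Assist the user."
user = "Write the opening chapter of a slow-burn mystery set in a harbour town."

# System prompt first, then the instruction, exactly as in the example above.
prompt = f"[SYSTEM_PROMPT]{system}[/SYSTEM_PROMPT][INST] {user} [/INST]"

resp = requests.post(
    "http://127.0.0.1:5000/v1/completions",           # assumed local tabbyAPI address
    headers={"Authorization": "Bearer YOUR_API_KEY"},  # placeholder key
    json={"prompt": prompt, "max_tokens": 512, "temperature": 0.8},
    timeout=300,
)
print(resp.json()["choices"][0]["text"])
```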

### SillyTavern Integration
Story String:
```python
[SYSTEM_PROMPT] {{#if system}}{{system}}[/SYSTEM_PROMPT] [INST]
{{/if}}{{#if wiBefore}}{{wiBefore}}
{{/if}}{{#if description}}{{description}}
{{/if}}{{#if personality}}{{personality}}
{{/if}}{{#if scenario}}{{scenario}}
{{/if}}{{#if wiAfter}}{{wiAfter}}
{{/if}}{{#if persona}}{{persona}}
{{/if}}{{trim}}[/INST] Understood.</s>
```

For response steering, use `[OOC]` commands (see the sketch after these examples), e.g.:
- `[OOC] Have them interrupted by a loud explosion in a nearby factory`
- `[OOC] Have her refuse to sell it and suggest another merchant instead`
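The sketch below shows where an `[OOC]` instruction sits in a multi-turn Mistral-V7 prompt: the previous assistant reply is closed with `</s>`, and the steering note travels inside the next `[INST]` block together with the user's reply. The conversation text is made up for illustration.

```python
# Illustrative only: the turn contents are invented; the tag layout follows
# the Mistral-V7 examples above.
system = "You are an award winning writer. Assist the user."

history = (
    f"[SYSTEM_PROMPT]{system}[/SYSTEM_PROMPT]"
    "[INST] The merchant studies the amulet. What does she say? [/INST]"
    " She turns it over slowly, frowning at the worn runes.</s>"
)

# Steering note appended to the next user turn, inside the same [INST] block.
ooc = "[OOC] Have her refuse to sell it and suggest another merchant instead"
prompt = history + f"[INST] I slide the amulet across the counter. {ooc} [/INST]"
```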

## Technical Details

### Training
- QLoRA training at 32768 context
- Merged with [gghfez/Mistral-Large-Instruct-2411](https://huggingface.co/gghfez/Mistral-Large-Instruct-2411) at bf16
- [jukofyork/Creative writing control vectors](https://huggingface.co/jukofyork/creative-writing-control-vectors-v3.0) were applied during synthetic dataset generation
- Includes standard assistant instruct data for long-context stability
- Note: Performance on code tasks may be reduced compared to the base model
- Note: No attempt was made to remove 'Name-Slop', so you'll still encounter Lily and Elara if you don't specify character names

### Context Length
- Base model: 131,072 tokens
- Training range: 1024-32768 tokens
- Training context window: 32768 tokens

## Testing Environments
Tested with exllamav2 4.5bpw (a minimal loading sketch follows this list) on:
- [tabbyAPI](https://github.com/theroyallab/tabbyAPI) + [MikuPad](https://github.com/lmg-anon/mikupad)
- [tabbyAPI](https://github.com/theroyallab/tabbyAPI) + [SillyTavern](https://github.com/SillyTavern/SillyTavern)
- [exui](https://github.com/turboderp/exui)
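
For local testing outside a server, the quant can also be loaded with exllamav2's Python API. This is a minimal sketch, not part of the card: the model path, context length, and sampling settings are placeholders, and class names or arguments may differ between exllamav2 versions, so check the library's own examples.

```python
# Minimal sketch of loading the EXL2 quant with exllamav2 (paths and settings are
# placeholders; verify against the exllamav2 examples for your installed version).
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/models/Writer-Large-2411-v2.1-exl2-4.5bpw"  # placeholder path
config.prepare()
config.max_seq_len = 32768  # stay within the trained context window

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8

prompt = (
    "[SYSTEM_PROMPT]You are an award winning writer. Assist the user."
    "[/SYSTEM_PROMPT][INST] Write the opening chapter of a short story. [/INST]"
)
print(generator.generate_simple(prompt, settings, 512))
```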