---
tags:
- merge
- mergekit
- lazymergekit
- GreenNode/GreenNode-mini-7B-multilingual-v1olet
- KoboldAI/Mistral-7B-Holodeck-1
base_model:
- GreenNode/GreenNode-mini-7B-multilingual-v1olet
- KoboldAI/Mistral-7B-Holodeck-1
---

# HoloViolet-7B-test5
The best version of HoloViolet. At this point it seems outclassed by twizzler, but I still love it for its proactive writing and sometimes unexpected outputs.

Update: quants available over [here](https://huggingface.co/mradermacher/HoloViolet-7B-GGUF), kudos to mradermacher.
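If you want to try the quants locally, here is a minimal llama-cpp-python sketch. The quant filename below is hypothetical (check mradermacher's repo for the actual files), and the prompt format is up to you:

```python
# Minimal sketch, assuming llama-cpp-python is installed (pip install llama-cpp-python)
# and a quant has been downloaded from the GGUF repo. The filename is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="HoloViolet-7B-test5.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available
)

out = llm.create_completion(
    "Write the opening of a scene set in a rainy city.",
    max_tokens=256,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```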

A very descriptive model, harnessing the literary strengths of KoboldAI's Mistral Holodeck, but less schizo.
It manages to get a grasp of the situation and doesn't ignore context nearly as much, while still expanding on it creatively.
It's not very subtle about telling you a character's intentions, as it is still a 7B, but it writes well imo.
GreenNode V1olet is a great model for supplying smarts, since it doesn't gravitate towards GPT-isms nearly as much as the other smart Mistral tunes.
Use the Roleplay prompt preset in SillyTavern; I find simple prompts work better with these smaller models.

HoloViolet-7B-test5 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [GreenNode/GreenNode-mini-7B-multilingual-v1olet](https://huggingface.co/GreenNode/GreenNode-mini-7B-multilingual-v1olet)
* [KoboldAI/Mistral-7B-Holodeck-1](https://huggingface.co/KoboldAI/Mistral-7B-Holodeck-1)

## 🧩 Configuration

```yaml
slices:
  - sources:
      - model: GreenNode/GreenNode-mini-7B-multilingual-v1olet
        layer_range: [0, 32]
      - model: KoboldAI/Mistral-7B-Holodeck-1
        layer_range: [0, 32]
merge_method: slerp
base_model: GreenNode/GreenNode-mini-7B-multilingual-v1olet
parameters:
  t:
    - value: 0.32
dtype: bfloat16
```
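For intuition, here is a rough sketch of what the slerp setting above does per tensor pair: corresponding weights from the two models are spherically interpolated with t = 0.32, which (if I read mergekit's convention right) keeps the result closer to the V1olet base than to Holodeck. This is only an illustration of the math, not mergekit's actual implementation, which handles per-layer t schedules, tokenizers, and other edge cases.

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors (illustrative only)."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_n = a_flat / (a_flat.norm() + eps)
    b_n = b_flat / (b_flat.norm() + eps)
    # Angle between the two weight vectors.
    omega = torch.arccos(torch.clamp(a_n @ b_n, -1.0, 1.0))
    if omega.abs() < 1e-4:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        merged = (1 - t) * a_flat + t * b_flat
    else:
        merged = (torch.sin((1 - t) * omega) * a_flat + torch.sin(t * omega) * b_flat) / torch.sin(omega)
    return merged.reshape(a.shape).to(a.dtype)

# t = 0.32 leans the blend toward the base model's weights.
base_w = torch.randn(4096)    # stand-in for a V1olet weight tensor
other_w = torch.randn(4096)   # stand-in for a Holodeck weight tensor
merged_w = slerp(0.32, base_w, other_w)
```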

## 💻 Usage

```python
# Install dependencies first: pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "son-of-man/HoloViolet-7B-test5"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```