Lewdiculous committed on
Commit dd09949
1 Parent(s): 5bdcb66

Update README.md

Files changed (1): README.md (+100, -0)
README.md CHANGED
---
license: cc-by-4.0
inference: false
tags:
- gguf
- mistral
- roleplay
---
This repository hosts GGUF-IQ-Imatrix quants for [ResplendentAI/Persephone_7B](https://huggingface.co/ResplendentAI/Persephone_7B).

*The return of a cult classic.*

Quants:
```python
quantization_options = [
    "Q4_K_M", "Q4_K_S", "IQ4_XS", "Q5_K_M", "Q5_K_S",
    "Q6_K", "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS"
]
```
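
If you want to grab one of these files programmatically, the sketch below uses `huggingface_hub` to pick a quant by name. It is a minimal example and not part of the original card: the repository id shown is a placeholder for this repo, and the exact `.gguf` filenames are discovered at runtime rather than assumed.

```python
# Minimal sketch (not from the original card): download one quant with huggingface_hub.
# Assumes `pip install huggingface_hub`; replace repo_id with this repository's actual id.
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "Lewdiculous/Persephone_7B-GGUF-IQ-Imatrix"  # placeholder id
wanted_quant = "Q4_K_M"                                # any entry from quantization_options

# Discover the .gguf files in the repo and pick the one matching the chosen quant.
gguf_files = [f for f in list_repo_files(repo_id) if f.endswith(".gguf")]
target = next(f for f in gguf_files if wanted_quant in f)

local_path = hf_hub_download(repo_id=repo_id, filename=target)
print("Saved to:", local_path)
```

As a rule of thumb, the smaller IQ3 quants trade quality for memory; pick the largest option your hardware comfortably fits.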

**What does "Imatrix" mean?**

It stands for **Importance Matrix**, a technique used to improve the quality of quantized models.
The **Imatrix** is calculated from calibration data, and it helps determine the importance of different model activations during the quantization process.
The idea is to preserve the most important information during quantization, which can help reduce the loss of model performance, especially when the calibration data is diverse.
[[1]](https://github.com/ggerganov/llama.cpp/discussions/5006) [[2]](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)

For imatrix data generation, kalomaze's `groups_merged.txt` with added roleplay chats was used; you can find it [here](https://huggingface.co/Lewdiculous/Datura_7B-GGUF-Imatrix/blob/main/imatrix-with-rp-format-data.txt). This was done simply to add a bit more diversity to the calibration data.
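
To make the idea concrete, here is a small NumPy sketch of importance-weighted quantization. It only illustrates the principle, under the assumption that per-column importance can be approximated by mean squared activations over calibration data; llama.cpp's real imatrix collection and quantizers are considerably more involved, and all names below are made up for the example.

```python
# Illustrative sketch only, not llama.cpp's implementation: estimate per-column
# importance from calibration activations, then pick a quantization scale that
# minimizes the importance-weighted reconstruction error.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(8, 256)).astype(np.float32)       # toy weight matrix
calib_acts = rng.normal(size=(64, 256)).astype(np.float32)   # toy calibration activations

# Importance per input column ~ mean squared activation over the calibration set.
importance = (calib_acts ** 2).mean(axis=0)

def quantize_row(row, importance, bits=4):
    """Choose the candidate scale with the lowest importance-weighted error."""
    qmax = 2 ** (bits - 1) - 1
    best = None
    for factor in np.linspace(0.8, 1.2, 9):                  # a few candidate scales
        scale = np.abs(row).max() * factor / qmax
        q = np.clip(np.round(row / scale), -qmax - 1, qmax)
        err = float((importance * (row - q * scale) ** 2).sum())
        if best is None or err < best[-1]:
            best = (q.astype(np.int8), scale, err)
    return best

q_row, scale, weighted_err = quantize_row(weights[0], importance)
```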

**Steps:**

```
Base ⇢ GGUF(F16) ⇢ Imatrix-Data(F16) ⇢ GGUF(Imatrix-Quants)
```
*Using the latest llama.cpp at the time.*
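
For readers who want to reproduce a similar pipeline, the sketch below maps those steps to llama.cpp tooling via Python. Treat every script name, binary name, flag, and file name as an assumption: they correspond to a llama.cpp checkout from around this period, change between versions, and are not taken from the original card.

```python
# Hedged sketch of the Base -> F16 GGUF -> imatrix -> quantized GGUF pipeline.
# Tool names and flags are assumptions for a contemporary llama.cpp checkout;
# verify them against your local version before running.
import subprocess

def run(cmd):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1) Base model -> GGUF (F16)
run(["python", "convert.py", "Persephone_7B", "--outtype", "f16",
     "--outfile", "persephone-f16.gguf"])

# 2) F16 GGUF + calibration text -> imatrix data
run(["./imatrix", "-m", "persephone-f16.gguf",
     "-f", "imatrix-with-rp-format-data.txt", "-o", "imatrix.dat"])

# 3) F16 GGUF + imatrix -> the quantized GGUFs listed above
for quant in ["Q4_K_M", "IQ3_M", "Q8_0"]:   # subset of quantization_options
    run(["./quantize", "--imatrix", "imatrix.dat",
         "persephone-f16.gguf", f"persephone-{quant}-imatrix.gguf", quant])
```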

# Original model information:

Officially rebranding Thespis to **TheSpice**. Why? Because it's a cooler, simpler name.
I've focused on making the model more flexible and on providing a more unique experience.
I'm still working on cleaning up my dataset, but I've shrunk it down a lot to focus on a "less is more" approach.
This is ultimately a return to form, back to the way I used to train Thespis, with more of a focus on a small, hand-edited dataset.

## Datasets Used

* Dolphin
* Ultrachat
* Capybara
* Augmental
* ToxicQA
* Yahoo Answers
* Airoboros 3.1

## Features

Narration

If you request information on objects or characters in the scene, the model will narrate it to you, most of the time without moving the story forward.

# You can look at mostly anything, as long as you end it with "What do I see?"

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64dd7cda3d6b954bf7cdd922/VREY8QHtH6fCL0fCp8AAC.png)

# You can also request to know what a character is thinking or planning.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64dd7cda3d6b954bf7cdd922/U3RTAgbaB2m1ygfZGJ-SM.png)

# You can ask for a quick summary of the character as well.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64dd7cda3d6b954bf7cdd922/uXFd6GhnXS8w_egUEfcAp.png)

# Before continuing the conversation as normal.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64dd7cda3d6b954bf7cdd922/dYTQUdCshUDtp_BJ20tHy.png)

## Prompt Format: Chat (the default Ooba template and SillyTavern template)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64dd7cda3d6b954bf7cdd922/59vi4VWP2d0bCbsW2eU8h.png)

If you're using Ooba in verbose mode as a server, you can check whether your console is logging something that looks like this:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64dd7cda3d6b954bf7cdd922/mB3wZqtwN8B45nR7W1fgR.png)

```
{System Prompt}

Username: {Input}
BotName: {Response}
Username: {Input}
BotName: {Response}

```
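
Since this is a plain chat transcript rather than a special-token template, it can be assembled with a few lines of code. The sketch below is a minimal example and not from the original card; the helper function and the sample system prompt are hypothetical placeholders.

```python
# Minimal sketch (hypothetical helper, not part of the original card):
# build the plain "Username: ... / BotName: ..." prompt shown above.
def build_prompt(system_prompt, history, username="Username", botname="BotName"):
    """history is a list of (user_message, bot_reply_or_None) turns."""
    lines = [system_prompt, ""]
    for user_msg, bot_msg in history:
        lines.append(f"{username}: {user_msg}")
        if bot_msg is not None:
            lines.append(f"{botname}: {bot_msg}")
    lines.append(f"{botname}:")  # leave an open line for the model to complete
    return "\n".join(lines)

prompt = build_prompt(
    "You are a narrator for an interactive roleplay.",  # placeholder system prompt
    [("I walk into the tavern. What do I see?", None)],
)
print(prompt)
```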

## Presets

All screenshots above were taken with the SillyTavern preset below.
## Recommended SillyTavern Preset -> (Temp: 1.25, MinP: 0.1, RepPen: 1.05)
The following is a roughly equivalent Kobold Horde preset.
## Recommended Kobold Horde Preset -> MinP
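
If you run the GGUF files locally instead of through SillyTavern or Horde, the same sampler values can be set directly. The sketch below is an assumption-laden example using llama-cpp-python (a recent version that exposes `min_p`); the model filename is a placeholder for whichever quant you downloaded.

```python
# Hedged sketch: apply the recommended samplers (Temp 1.25, MinP 0.1, RepPen 1.05)
# with llama-cpp-python. Assumes a version that supports `min_p`; the .gguf
# filename below is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="persephone-Q4_K_M-imatrix.gguf", n_ctx=8192)

output = llm(
    "Username: I walk into the tavern. What do I see?\nBotName:",
    temperature=1.25,     # Temp
    min_p=0.1,            # MinP
    repeat_penalty=1.05,  # RepPen
    max_tokens=256,
)
print(output["choices"][0]["text"])
```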

# Disclaimer

Please prompt responsibly and take anything outputted by any Language Model with a huge grain of salt. Thanks!