---
license: cc-by-nc-4.0
tags:
- GGUF
- iMat
- llama3
---

```
QUANT CARTEL

PROUDLY PRESENTS
```

## Llama-3-8B-EGO-iMat-GGUF

Quantized from fp32 with love.
* Weighted quantizations were calculated using groups_merged.txt with 105 chunks (the recommended amount for this file) and n_ctx=512; a sketch of the process is shown below. Special thanks to jukofyork for sharing [this process](https://huggingface.co/jukofyork/WizardLM-2-8x22B-imatrix).
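
If you want to reproduce these quants, the process roughly maps onto llama.cpp's `imatrix` and `quantize` tools. The sketch below wraps them from Python; the binary paths, the fp32 source filename, and the IQ4_XS target are assumptions and will vary with your llama.cpp build.

```python
# Minimal sketch of the imatrix workflow described above.
# Binary names, file paths, and the quant target are assumptions.
import subprocess

MODEL_F32 = "Llama-3-8B-EGO-F32.gguf"   # assumed name of the fp32 conversion
CALIB_FILE = "groups_merged.txt"        # calibration text used for the imatrix
IMATRIX_OUT = "imatrix.dat"

# 1. Compute the importance matrix: 105 chunks at n_ctx=512, as noted above.
subprocess.run([
    "./imatrix",
    "-m", MODEL_F32,
    "-f", CALIB_FILE,
    "-o", IMATRIX_OUT,
    "--chunks", "105",
    "-c", "512",
], check=True)

# 2. Quantize with the importance matrix applied (IQ4_XS as an example target).
subprocess.run([
    "./quantize",
    "--imatrix", IMATRIX_OUT,
    MODEL_F32,
    "Llama-3-8B-EGO-IQ4_XS.gguf",
    "IQ4_XS",
], check=True)
```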

<b>Note - please use SillyTavern along with the following prompt format:</b>
```
[EGO]Name: Character name and then everything that forms the personality and speech patterns (i.e. scenario, sample dialogue, character definitions, etc.)[/EGO]
[SEEN]User message.[/SEEN]
Character Name:
```
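
If you'd like to sanity-check the format outside SillyTavern, here is a rough sketch using the llama-cpp-python bindings (not part of this repo; the GGUF filename, character text, and sampling settings are assumptions):

```python
# Rough sketch of running the prompt format above with llama-cpp-python.
# The GGUF filename, example character, and sampling settings are assumptions.
from llama_cpp import Llama

llm = Llama(model_path="Llama-3-8B-EGO-IQ4_XS.gguf", n_ctx=2048)

prompt = (
    "[EGO]Name: Mira. A dry-witted librarian who speaks in short sentences. "
    "Scenario: a quiet night shift at the library.[/EGO]\n"
    "[SEEN]Hi Mira, what are you reading?[/SEEN]\n"
    "Mira:"
)

# Stop on the pseudotokens so the model doesn't start writing the user's turn.
out = llm(prompt, max_tokens=200, stop=["[SEEN]", "[EGO]"])
print(out["choices"][0]["text"])
```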

For a brief rundown of iMatrix quant performance, please see this [PR](https://github.com/ggerganov/llama.cpp/pull/5747).

<b>All quants are verified working prior to uploading to the repo for your safety and convenience.</b>

Please note that importance matrix quantizations are a work in progress. IQ4 and above is recommended for best results.

The original model card is [here](https://huggingface.co/Envoid/Llama-3-8B-EGO) and is reproduced below.

---

# This model isn't particularly great. It's just an undercooked experiment.

Releasing it anyway, just in case it accidentally makes good merge meat.

# It also has a tendency to produce mature content without warning.

This model is tuned from the base Llama-3-8B model.

I adapted the leaked Undi dataset into training samples with custom formatting. This model pretty much only functions properly in SillyTavern.

The formatting has two pairs of pseudotokens:

```
[EGO]Name: Character name and then everything that forms the personality and speech patterns (i.e. scenario, sample dialogue, character definitions, etc.)[/EGO]
[SEEN]User message.[/SEEN]
Character Name:
```

The self-attention modules were fine-tuned separately on this dataset, and the pseudotokens were chosen because they made logical sense with respect to the character giving a reply, without allowing the model to 'connect the dots' during training and figure out that it is indeed an AI language model.

After this was done, all modules were fine-tuned together on the dendrite dataset in order to connect the changes made to the attention modules.
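
The card doesn't say exactly how that staged fine-tune was run. Purely as an illustration, one way to scope training to just the self-attention projections of a Llama-style model, and later widen it to all modules, is a LoRA setup with restricted `target_modules` via PEFT; this is not necessarily what was done here.

```python
# Illustration only: the card does not specify the training method.
# LoRA configs (PEFT) that restrict which Llama modules receive updates.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

BASE_MODEL = "meta-llama/Meta-Llama-3-8B"  # gated repo; requires access
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Stage 1: train only the self-attention projections.
attn_only = LoraConfig(
    r=16, lora_alpha=32, task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(model, attn_only)
model.print_trainable_parameters()

# Stage 2: a wider config covering the MLP projections as well,
# loosely analogous to "all modules fine-tuned together" in the card.
all_modules = LoraConfig(
    r=16, lora_alpha=32, task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```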

So with regards to building a SillyTavern prompt template, you basically want the entire story string and any additional stylistic instructions enclosed in the [EGO] tags, and then the user messages enclosed in [SEEN] tags.
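
As a concrete illustration of that assembly (the character data and helper below are hypothetical, not an official SillyTavern template):

```python
# Illustrative sketch of assembling the prompt described above: the story
# string and style instructions go inside [EGO]...[/EGO], each user message
# inside [SEEN]...[/SEEN], and the reply is cued with "Character Name:".
def build_prompt(char_name: str, story_string: str, user_messages: list[str]) -> str:
    ego_block = f"[EGO]Name: {char_name}. {story_string}[/EGO]"
    seen_blocks = "\n".join(f"[SEEN]{msg}[/SEEN]" for msg in user_messages)
    return f"{ego_block}\n{seen_blocks}\n{char_name}:"

# Hypothetical character card and chat history.
print(build_prompt(
    "Mira",
    "A sarcastic librarian. Scenario: a quiet night shift. "
    'Sample dialogue: "Shhh. This is a library."',
    ["Hi Mira, got any book recommendations?"],
))
```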

It doesn't give particularly verbose replies unless you're continuing a roleplay with verbose messages. Otherwise it's pretty bad.