inflatebot committed
Commit 898050f
Parent: 9b853e7

Updated the readme to remove the formatting diatribe and improve readability.

Files changed (1): README.md (+8 -25)
README.md CHANGED
@@ -34,35 +34,20 @@ If issues with coherency occur, try *in*creasing MinP or *de*creasing Temperature
 
 Other samplers shouldn't be necessary. XTC was shown to break outputs. DRY should be okay if used sparingly. Other penalty-type samplers should probably be avoided.
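+
+As a rough illustration of the advice above (keys follow common frontend API conventions such as koboldcpp's; the values are hypothetical starting points, not tested recommendations):
+
+```yaml
+temperature: 1.0      # lower this if coherency slips
+min_p: 0.05           # raise this if coherency slips
+xtc_probability: 0.0  # leave XTC off; it was shown to break outputs
+dry_multiplier: 0.0   # leave DRY off by default; enable sparingly if needed
+```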
 
-
 ### Formatting
 The base model for Mag Mell is [Mistral-Nemo-Base-2407-chatml](https://huggingface.co/IntervitensInc/Mistral-Nemo-Base-2407-chatml), and as such ChatML formatting is recommended.
-#### After further testing, I can confirm that ChatML works best. The section below can be ignored in the context of this model specifically.
-
-However, many component models still use Mistral's format. As a result, the word "user" or "assistant" will occasionally appear at the bottom of the screen.
-
-
-__However.__ Some things have come out regarding Mistral's format that should be covered here, and they implicate not just Mag Mell, but *all* Mistral-based models since the original Mistral 7B.
-
-*The following information is as correct as I can get it as of September 20th, 2024.*
-
-We've had Mistral's tokenizer handling and completions format all wrong. *The templates in your frontend are probably wrong right now.*
 
-MistralAI member Pandora has been going around helping to correct everyone.
-
-Right now, Pandora has opened PRs for ~~[SillyTavern](https://github.com/SillyTavern/SillyTavern/pull/2883)~~ (**MERGED** to Staging; update and use Mistral V3-Tekken), ~~[KoboldAI Lite](https://github.com/LostRuins/lite.koboldai.net/pull/87)~~ (**MERGED** to the dev branch), and ~~[KoboldCPP chat adapters](https://github.com/LostRuins/koboldcpp/pull/1131)~~ (**MERGED**, staged for the next release).
-
-*When these are merged*, the templates in them can be assumed to be completely correct.
-
-Until then, ~~I've [provided templates for SillyTavern on GitHub that should be More Correct than the ones ST currently ships](https://github.com/inflatebot/SillyTavern-Nemo-Templates).~~ Use Mistral V3-Tekken from SillyTavern Staging.
-If you don't want to or can't update, you can get the new prompt template files [here](https://github.com/SillyTavern/SillyTavern/tree/staging/default/content/presets) (in the `context` and `instruct` folders).
-
-If you experiment with this, please let me know how it goes! The conversation on how to properly implement Mistral is still ongoing.
+Early testing versions had a tendency to leak tokens, but this should be more or less hammered out. It recently (12-18-2024) came to our attention that cache quantization may either cause or exacerbate this issue.
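+
+For quick reference, a ChatML prompt uses the standard `<|im_start|>`/`<|im_end|>` tags, as in the minimal sketch below (the system prompt and messages are placeholders, not shipped defaults):
+
+```
+<|im_start|>system
+You are an adventurous storytelling partner.<|im_end|>
+<|im_start|>user
+Set the opening scene.<|im_end|>
+<|im_start|>assistant
+```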
 
  ## Merge Details
-Multi-stage SLERP merge, DARE-TIES'd together. Intended to be a general-purpose "Best of Nemo" model for any fictional, creative use case. Inspired by hyper-merges like [Tiefighter](https://huggingface.co/KoboldAI/LLaMA2-13B-Tiefighter) and [Umbral Mind](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B).
+Mag Mell is a multi-stage merge, inspired by hyper-merges like [Tiefighter](https://huggingface.co/KoboldAI/LLaMA2-13B-Tiefighter) and [Umbral Mind](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B).
+It is intended to be a general-purpose "Best of Nemo" model for any fictional, creative use case.
+
+Six models were chosen based on three categories; they were then paired up and merged via layer-weighted SLERP to create intermediate "specialists", each of which was evaluated in its own domain.
+The specialists were then merged into the base via DARE-TIES, with hyperparameters chosen to reduce interference caused by the overlap of the three domains.
+The idea behind this approach is to extract the best qualities of each component and produce a model whose task vectors represent more than the sum of their parts.
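+
+The exact configurations aren't reproduced here, but each stage looks schematically like the mergekit sketch below, using the Hero pair from the list that follows as an example. The `t` curve, `density`, and `weight` values are illustrative assumptions, and `./hero`, `./specialist-2`, and `./specialist-3` are hypothetical local paths for the three SLERP outputs, not the real hyperparameters or artifacts:
+
+```yaml
+# Stage 1 (one of three such configs): layer-weighted SLERP of a specialist pair.
+merge_method: slerp
+base_model: elinas/Chronos-Gold-12B-1.0
+models:
+  - model: elinas/Chronos-Gold-12B-1.0
+  - model: Fizzarolli/MN-12b-Sunrose
+parameters:
+  t: [0.0, 0.3, 0.5, 0.3, 0.0]  # interpolation weight varies with layer depth
+dtype: bfloat16
+---
+# Stage 2 (separate config): DARE-TIES the three specialists onto the base.
+merge_method: dare_ties
+base_model: IntervitensInc/Mistral-Nemo-Base-2407-chatml
+models:
+  - model: ./hero
+    parameters: {density: 0.7, weight: 0.4}
+  - model: ./specialist-2
+    parameters: {density: 0.7, weight: 0.3}
+  - model: ./specialist-3
+    parameters: {density: 0.7, weight: 0.3}
+dtype: bfloat16
+```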
 
-Mag Mell is composed of 3 intermediate parts:
+The three specialists are as follows:
 
  - Hero (RP, kink/trope coverage): [Chronos Gold](https://huggingface.co/elinas/Chronos-Gold-12B-1.0), [Sunrose](https://huggingface.co/Fizzarolli/MN-12b-Sunrose).
@@ -72,8 +57,6 @@ Mag Mell is composed of 3 intermediate parts:
 
 I've been dreaming about this merge since Nemo tunes started coming out in earnest. From our testing, Mag Mell demonstrates worldbuilding capabilities unlike any model in its class, comparable to old adventuring models like Tiefighter, and prose that exhibits minimal "slop" (not bad for no finetuning), frequently devising electrifying metaphors that left us consistently astonished.
 
-Use ChatML formatting. Early testing versions had a tendency to leak tokens, but this should be more or less hammered out.
-
 I don't want to toot my own bugle though; I'm really proud of how this came out, but please leave your feedback, good or bad.
 
 Special thanks as usual to Toaster for his feedback and Fizz for helping fund compute, as well as the KoboldAI Discord for their resources.
 