Epiculous committed
Commit 23e086e
1 Parent(s): 3e4c64b

Update README.md

Files changed (1)
  1. README.md +4 -3
README.md CHANGED
@@ -31,9 +31,10 @@ Azure Dusk was trained with the Mistral Instruct template, therefore it should b
  ```
  ### Context and Instruct
  [Mistral-Custom-Context.json](https://files.catbox.moe/l9w0ry.json) <br/>
- [Mistral-Custom-Instruct.json](https://files.catbox.moe/9xiiwb.json)
-
-
+ [Mistral-Custom-Instruct.json](https://files.catbox.moe/9xiiwb.json) <br/>
+ *** NOTE *** <br/>
+ There have been reports of the quantized model misbehaving with the Mistral prompt; if you are seeing issues, it may be worth trying the ChatML Context and Instruct templates.
+ If you are using GGUF, I strongly advise using ChatML; for some reason that quantization performs better with ChatML.
  ### Current Top Sampler Settings
  [Crimson_Dawn-Nitral-Special](https://files.catbox.moe/8xjxht.json) - Considered the best settings! <br/>
  [Crimson_Dawn-Magnum-Style](https://files.catbox.moe/lc59dn.json)
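The note added in this commit suggests swapping the Mistral Instruct templates for ChatML when running GGUF quants. As a point of reference, here is a minimal sketch of what the two prompt layouts generally look like; the role names and sample messages are placeholders and are not taken from the linked Context/Instruct presets.

```python
# Illustrative sketch of the two prompt wrappers referred to in the note above.
# The system/user strings are placeholders; real frontends build these from the
# Context/Instruct templates, and the tokenizer's BOS handling is omitted here.

def mistral_instruct_prompt(user: str) -> str:
    # Generic Mistral Instruct layout: the user turn sits between [INST] tags.
    return f"[INST] {user} [/INST]"

def chatml_prompt(system: str, user: str) -> str:
    # Generic ChatML layout: each turn is wrapped in <|im_start|>/<|im_end|>
    # markers, and the prompt ends with an open assistant turn to complete.
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

if __name__ == "__main__":
    print(mistral_instruct_prompt("Write a short scene at dusk."))
    print(chatml_prompt("You are a roleplay assistant.", "Write a short scene at dusk."))
```

In practice the frontend applies these wrappers through the Context and Instruct templates, so switching presets is usually all that is needed; the sketch only makes visible what changes between the two formats.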