Epiculous committed
Commit d95d928 · verified · 1 Parent(s): 42a8e5a

Update README.md

Files changed (1)
  1. README.md +4 -2
README.md CHANGED
@@ -30,8 +30,10 @@ Azure Dusk was trained with the Mistral Instruct template, therefore it should b
  "<s>[INST] Prompt goes here [/INST]<\s>"
  ```
  ### Context and Instruct
- [Mistral-Custom-Context.json](https://files.catbox.moe/l9w0ry.json) <br/>
- [Mistral-Custom-Instruct.json](https://files.catbox.moe/9xiiwb.json) <br/>
+ [Magnum-123B-Context.json](https://files.catbox.moe/rkyqwg.json) <br/>
+ [Magnum-123B-Instruct.json](https://files.catbox.moe/obb5oe.json) <br/>
+ ~~[Mistral-Custom-Context.json](https://files.catbox.moe/l9w0ry.json) <br/>~~
+ ~~[Mistral-Custom-Instruct.json](https://files.catbox.moe/9xiiwb.json) <br/>~~
  *** NOTE *** <br/>
  There have been reports of the quantized model misbehaving with the mistral prompt, if you are seeing issues it may be worth trying ChatML Context and Instruct templates.
  If you are using GGUF I strongly advise using ChatML, for some reason that quantization performs better using ChatML.
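For readers who want to sanity-check the prompt format the README describes, here is a minimal sketch (not part of this commit) that renders a Mistral-Instruct-style prompt with the `transformers` chat templating API. The repo id is a hypothetical placeholder, and whether this template or the ChatML templates is appropriate follows the note in the diff above.

```python
# Minimal sketch (assumption, not from this commit): render a prompt using the
# tokenizer's built-in chat template via transformers.
from transformers import AutoTokenizer

MODEL_ID = "your-namespace/your-model"  # hypothetical placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

messages = [{"role": "user", "content": "Prompt goes here"}]

# With a Mistral-Instruct-style template this renders roughly as
# "<s>[INST] Prompt goes here [/INST]"; a ChatML template would instead wrap
# turns in <|im_start|> / <|im_end|> markers.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)
```

If you are running a GGUF quantization in a llama.cpp-based frontend, the same note applies: selecting ChatML Context and Instruct templates there mirrors what the README recommends.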