### Context and Instruct
[Magnum-123B-Context.json](https://files.catbox.moe/rkyqwg.json) <br/>
[Magnum-123B-Instruct.json](https://files.catbox.moe/obb5oe.json) <br/>

*** NOTE *** <br/>
There have been reports of the quantized model misbehaving with the Mistral prompt; if you are seeing issues, it may be worth trying the ChatML Context and Instruct templates.
If you are using GGUF, I strongly advise using ChatML; for some reason that quantization performs better with ChatML.
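For reference, this is the general shape of a ChatML prompt that the note above recommends switching to. The sketch below is a minimal, hypothetical helper (not part of this model card or its template files) that assembles one turn in that format:

```python
def chatml_prompt(system: str, user: str) -> str:
    """Build a single-turn ChatML prompt: each message is wrapped in
    <|im_start|>{role} ... <|im_end|>, ending with an open assistant turn."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

print(chatml_prompt("You are a helpful assistant.", "Hello!"))
```

In practice the linked Context/Instruct JSON files handle this formatting for you inside SillyTavern; the sketch only illustrates what the ChatML template produces under the hood.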