Tags: Transformers, mpt, Composer, MosaicML, llm-foundry, text-generation-inference
TheBloke committed
Commit e93c6f4
1 Parent(s): 561c5ad

Update README.md

Files changed (1): README.md +2 -2
README.md CHANGED
```diff
@@ -62,10 +62,10 @@ prompt
 
 The base model has an 8K context length. It is not yet confirmed if the 8K context of this model works with the quantised files.
 
-If it does, [KoboldCpp](https://github.com/LostRuins/koboldcpp) supports 8K context if you manually it to 8K by adjusting the text box above the slider:
+If it does, [KoboldCpp](https://github.com/LostRuins/koboldcpp) supports 8K context if you manually set it to 8K by adjusting the text box above the slider:
 ![.](https://i.imgur.com/tEbpeJqm.png)
 
-It is currently unknown as to whether it is compatible with other clients.
+It is currently unknown whether increased context is compatible with other MPT GGML clients.
 
 If you have feedback on this, please let me know.
 
```
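For readers who launch KoboldCpp from the command line rather than its GUI slider, the context size can also be set at startup. A minimal sketch is below; the model filename is illustrative (substitute your own quantised GGML file), and it assumes KoboldCpp's `--contextsize` launch flag:

```shell
# Hypothetical KoboldCpp launch with an 8K context window.
# The model path is a placeholder; point it at your quantised MPT GGML file.
python koboldcpp.py \
  --model ./model.ggmlv3.q4_0.bin \
  --contextsize 8192
```

As the diff notes, 8K context is only confirmed for KoboldCpp here; other MPT GGML clients may ignore or reject an extended context setting.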