TheBloke committed on
Commit
22de121
1 Parent(s): 0a9aa28

Update README.md

Files changed (1)
README.md +1 -1
README.md CHANGED
@@ -73,7 +73,7 @@ It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com
 
 ## Note about context length
 
-It is currently untested as to whether the 8K context is compatible with available clients such as text-generation-webui.
+It is currently untested as to whether the 8K context is compatible with available GPTQ clients such as text-generation-webui.
 
 If you have feedback on this, please let me know.