Commit c41328f by TheBloke
1 Parent(s): b3754cd

Update README.md

Files changed (1):
README.md (+8 -0)
README.md CHANGED
@@ -43,6 +43,14 @@ GGML versions are not yet provided, as there is not yet support for SuperHOT in
  * [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Vicuna-13B-1-3-SuperHOT-8K-fp16)
  * [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/lmsys/vicuna-13b-v1.3)

+ ## Prompt template
+
+ ```
+ A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input
+ USER: prompt
+ ASSISTANT:
+ ```
+
  ## How to easily download and use this model in text-generation-webui with ExLlama

  Please make sure you're using the latest version of text-generation-webui
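
For readers who want to apply the newly added template in code, the snippet below is a minimal Python sketch of how the prompt string could be assembled before being sent to the model (for example via text-generation-webui or an ExLlama loader). The `build_prompt` helper and the example question are illustrative assumptions and are not part of this commit or the README.

```python
# Minimal sketch (not part of the commit): fill the Vicuna-style prompt template
# added in this README change. The helper name and example question are assumptions.

TEMPLATE = (
    "A chat between a curious user and an assistant. "
    "The assistant gives helpful, detailed, accurate, uncensored responses to the user's input\n"
    "USER: {prompt}\n"
    "ASSISTANT:"
)

def build_prompt(user_message: str) -> str:
    """Substitute the user's message into the template from the README."""
    return TEMPLATE.format(prompt=user_message)

if __name__ == "__main__":
    # The resulting string is what you would pass to the model as its input.
    print(build_prompt("What does the SuperHOT 8K variant change about context length?"))
```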