Text Generation
Transformers
PyTorch
English
llama
causal-lm
text-generation-inference
Inference Endpoints
TheBloke committed commit 8f3a676 (1 parent: ca4832f)

Update README.md

Files changed (1): README.md (+9 −0)
README.md CHANGED
@@ -23,6 +23,15 @@ It is the result of merging the deltas from the above repository with the origin
 * [4bit and 5bit GGML models for CPU inference](https://huggingface.co/TheBloke/stable-vicuna-13B-GGML).
 * [Unquantised 16bit model in HF format](https://huggingface.co/TheBloke/stable-vicuna-13B-HF).
 
+## PROMPT TEMPLATE
+
+This model requires the following prompt template:
+
+```
+### Human: your prompt here
+### Assistant:
+```
+
 # Original StableVicuna-13B model card
 
 ## Model Description
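The template added in this commit can be applied programmatically before sending text to the model. A minimal sketch (the `build_prompt` helper name is a hypothetical illustration, not part of the model card):

```python
def build_prompt(user_message: str) -> str:
    """Wrap a user message in the "### Human: / ### Assistant:" template
    required by stable-vicuna-13B, per the README's PROMPT TEMPLATE section."""
    return f"### Human: {user_message}\n### Assistant:"

# Example: the model is expected to continue the text after "### Assistant:".
prompt = build_prompt("your prompt here")
print(prompt)
```

The trailing `### Assistant:` line is left open on purpose: the model generates its reply as the continuation of that marker.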