TheBloke committed
Commit 4c89eae
1 Parent(s): 8b50f4f

Update README.md

Files changed (1): README.md (+8 -4)
README.md CHANGED
@@ -32,6 +32,13 @@ GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/gger
 * [4-bit, 5-bit and 8-bit GGML models for CPU(+GPU) inference](https://huggingface.co/TheBloke/WizardLM-13B-1.0-GGML)
 * [Merged, unquantised fp16 model in HF format](https://huggingface.co/TheBloke/wizardLM-13B-1.0-fp16)
 
+## Prompt Template
+
+```
+A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
+USER: prompt goes here
+ASSISTANT:
+```
 ## THE FILES IN MAIN BRANCH REQUIRES LATEST LLAMA.CPP (May 19th 2023 - commit 2d5db48)!
 
 llama.cpp recently made another breaking change to its quantisation methods - https://github.com/ggerganov/llama.cpp/pull/1508

@@ -53,10 +60,7 @@ I have quantised the GGML files in this repo with the latest version. Therefore
 I use the following command line; adjust for your tastes and needs:
 
 ```
-./main -t 12 -m WizardLM-13B-1.0.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.
-### Instruction:
-Write a story about llamas
-### Response:"
+./main -t 12 -m WizardLM-13B-1.0.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: write a story about llamas ASSISTANT:"
 ```
 Change `-t 12` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
 
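The template this commit documents is the Vicuna-style two-role format, and the updated `./main` example simply inlines it into `-p`. For programmatic use, here is a minimal sketch assuming the third-party `llama-cpp-python` bindings (`pip install llama-cpp-python`) and a local copy of the q5_0 file; the `build_prompt` helper and the exact generation settings are illustrative, mirroring the README's command line rather than anything shipped in this repo:

```python
# Sketch: wrap a user message in the prompt template documented above and
# run it through llama-cpp-python (assumed installed: pip install llama-cpp-python).
from llama_cpp import Llama

SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_prompt(user_message: str) -> str:
    """Illustrative helper: apply the USER/ASSISTANT template."""
    return f"{SYSTEM}\nUSER: {user_message}\nASSISTANT:"

llm = Llama(
    model_path="WizardLM-13B-1.0.ggmlv3.q5_0.bin",  # adjust to your local path
    n_ctx=2048,    # mirrors -c 2048
    n_threads=8,   # mirrors -t; set to your physical core count
)

result = llm(
    build_prompt("Write a story about llamas"),
    max_tokens=512,        # the README's -n -1 is unbounded; capped here
    temperature=0.7,       # mirrors --temp 0.7
    repeat_penalty=1.1,    # mirrors --repeat_penalty 1.1
    stop=["USER:"],        # stop if the model begins a new user turn
)
print(result["choices"][0]["text"])
```

The `stop=["USER:"]` guard is worth keeping with this template: without it, chat-tuned models often continue the transcript and invent the next user turn.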
 
 
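Similarly, the `-t` advice at the end of the diff hinges on physical cores versus hardware threads. A small sketch, assuming the third-party `psutil` package, for looking the counts up rather than reading them off a spec sheet:

```python
# Sketch: pick a -t value for ./main. os.cpu_count() returns logical CPUs
# (hardware threads); psutil, a third-party package, can count physical cores.
import os

import psutil

logical = os.cpu_count()                    # e.g. 16 on an 8-core/16-thread CPU
physical = psutil.cpu_count(logical=False)  # e.g. 8 -> pass ./main -t 8
print(f"logical CPUs: {logical}, physical cores: {physical} -> use -t {physical}")
```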