TheBloke committed
Commit 1874687
1 Parent(s): 6272cf2

Upload README.md

Files changed (1):
  1. README.md +7 -4

README.md CHANGED
@@ -7,7 +7,9 @@ license: mit
 model_creator: Nobody.png
 model_name: Yi 34B GiftedConvo Llama
 model_type: llama
-prompt_template: '{prompt}
+prompt_template: 'USER: {prompt}
+
+ASSISTANT:
 
 '
 quantized_by: TheBloke
@@ -71,10 +73,11 @@ Here is an incomplete list of clients and libraries that are known to support GG
 <!-- repositories-available end -->
 
 <!-- prompt-template start -->
-## Prompt template: Unknown
+## Prompt template: User-Assistant
 
 ```
-{prompt}
+USER: {prompt}
+ASSISTANT:
 
 ```
 
@@ -200,7 +203,7 @@ Windows Command Line users: You can set the environment variable by running `set
 Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
 
 ```shell
-./main -ngl 32 -m yi-34b-giftedconvo-merged.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
+./main -ngl 32 -m yi-34b-giftedconvo-merged.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "USER: {prompt}\nASSISTANT:"
 ```
 
 Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
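The commit above swaps the model card's prompt template from a bare `{prompt}` to a User-Assistant format. A minimal sketch of filling that template before sending text to the model (the template string is taken from the diff; the helper function name and example input are illustrative, not part of the repository):

```python
def format_prompt(user_message: str) -> str:
    """Fill the User-Assistant prompt template from the updated README.

    The template literal matches the `USER: {prompt}\\nASSISTANT:` form
    shown in the diff; this wrapper itself is a hypothetical helper.
    """
    template = "USER: {prompt}\nASSISTANT:"
    return template.format(prompt=user_message)


# Example: build the full prompt for a single user turn.
print(format_prompt("Summarize the change in this commit."))
```

The same substitution is what the updated `llama.cpp` invocation performs when you replace `{prompt}` in `-p "USER: {prompt}\nASSISTANT:"` with your actual input.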