TheBloke committed
Commit bcd8006
1 Parent(s): e43fb0b

Upload README.md

Files changed (1): README.md (+5 -16)
README.md CHANGED
@@ -8,15 +8,8 @@ license: apache-2.0
 model_creator: MonsterAPI
 model_name: Mistral 7B Norobots
 model_type: mistral
-prompt_template: '<|im_start|>system
-
-  {system_message}<|im_end|>
-
-  <|im_start|>user
-
-  {prompt}<|im_end|>
-
-  <|im_start|>assistant
+prompt_template: '<|system|> </s> <|user|> {prompt} </s> <|assistant|> {{response}}
+  </s>
 
   '
 quantized_by: TheBloke
@@ -85,14 +78,10 @@ Here is an incomplete list of clients and libraries that are known to support GG
 <!-- repositories-available end -->
 
 <!-- prompt-template start -->
-## Prompt template: ChatML
+## Prompt template: NoRobots
 
 ```
-<|im_start|>system
-{system_message}<|im_end|>
-<|im_start|>user
-{prompt}<|im_end|>
-<|im_start|>assistant
+<|system|> </s> <|user|> {prompt} </s> <|assistant|> {{response}} </s>
 
 ```
 
@@ -211,7 +200,7 @@ Windows Command Line users: You can set the environment variable by running `set
 Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
 
 ```shell
-./main -ngl 32 -m mistral_7b_norobots.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
+./main -ngl 32 -m mistral_7b_norobots.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|system|> </s> <|user|> {prompt} </s> <|assistant|> {{response}} </s>"
 ```
 
 Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
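
For reference, the new NoRobots template from this commit can be filled in with a small helper before the result is passed to `llama.cpp`'s `-p` flag. This is only a sketch: the `build_prompt` helper name is mine, not from the card, and it assumes the `{{response}}` slot is left empty for the model to generate.

```python
# Sketch: fill the user slot of the NoRobots prompt template from this diff.
# Assumption: the {{response}} slot is omitted, since the model generates it.
NOROBOTS_TEMPLATE = "<|system|> </s> <|user|> {prompt} </s> <|assistant|> "

def build_prompt(user_message: str) -> str:
    """Substitute the user's message into the template."""
    return NOROBOTS_TEMPLATE.format(prompt=user_message)

print(build_prompt("Write a haiku about autumn."))
```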