Upload README.md
README.md CHANGED
@@ -13,15 +13,12 @@ model_creator: Jeonghwan Park
 model_name: Pivot 0.1 Evil A
 model_type: mistral
 pipeline_tag: text-generation
-prompt_template: '

-{

-<|im_start|>user

-
-
-<|im_start|>assistant

 '
 quantized_by: TheBloke

@@ -87,14 +84,13 @@ Here is an incomplete list of clients and libraries that are known to support GGUF
 <!-- repositories-available end -->

 <!-- prompt-template start -->
-## Prompt template:

 ```
-
-{
-
-
-<|im_start|>assistant

 ```

@@ -213,7 +209,7 @@ Windows Command Line users: You can set the environment variable by running `set
 Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

 ```shell
-./main -ngl 32 -m pivot-0.1-evil-a.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "
 ```

 Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

 model_name: Pivot 0.1 Evil A
 model_type: mistral
 pipeline_tag: text-generation
+prompt_template: '### Instruction:

+{prompt}


+### Response:

 '
 quantized_by: TheBloke

 <!-- repositories-available end -->

 <!-- prompt-template start -->
+## Prompt template: Alpaca-InstructOnly2

 ```
+### Instruction:
+{prompt}
+
+### Response:

 ```

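For illustration only (this snippet is not part of the README change): a minimal Python sketch of how the Alpaca-InstructOnly2 template above can be filled in before the text is sent to an inference backend. The constant and function names are hypothetical.

```python
# Hypothetical helper, not from the README: fill the Alpaca-InstructOnly2
# template with a user request before passing the string to an inference backend.
ALPACA_INSTRUCT_ONLY_2 = "### Instruction:\n{prompt}\n\n### Response:"

def build_prompt(user_request: str) -> str:
    """Return the full prompt string with the user's request in the {prompt} slot."""
    return ALPACA_INSTRUCT_ONLY_2.format(prompt=user_request)

print(build_prompt("Summarise what a GGUF file is in one sentence."))
```
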
 Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

 ```shell
+./main -ngl 32 -m pivot-0.1-evil-a.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction:\n{prompt}\n\n### Response:"
 ```

 Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
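As a usage sketch (not part of this README change), the same settings as the `./main` example above could be reproduced from Python with the llama-cpp-python package, assuming it is installed and the GGUF file has been downloaded locally; the parameter values simply mirror the shell flags.

```python
# Sketch assuming the llama-cpp-python package and a local copy of the GGUF file;
# illustrative only, not taken from the README.
from llama_cpp import Llama

llm = Llama(
    model_path="./pivot-0.1-evil-a.Q4_K_M.gguf",
    n_ctx=2048,       # mirrors -c 2048
    n_gpu_layers=32,  # mirrors -ngl 32; use 0 if there is no GPU acceleration
)

prompt = "### Instruction:\nWrite one sentence about quantized LLMs.\n\n### Response:"
output = llm(
    prompt,
    max_tokens=256,      # the shell example uses -n -1 (generate until end-of-sequence)
    temperature=0.7,     # mirrors --temp 0.7
    repeat_penalty=1.1,  # mirrors --repeat_penalty 1.1
)
print(output["choices"][0]["text"])
```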