Update README.md
README.md CHANGED
@@ -37,10 +37,10 @@ Many thanks to William Beauchamp from [Chai](https://chai-research.com/) for pro
 * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/llama-2-13B-German-Assistant-v2-GGML)
 * [Original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/flozi00/Llama-2-13B-german-assistant-v2)

-## Prompt template:
+## Prompt template: OpenAssistant

 ```
-{prompt}
+<|prompter|>{prompt} <|endoftext|> <|assistant|>
 ```

 ## Provided files
@@ -128,8 +128,7 @@ model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
 """

 prompt = "Tell me about AI"
-prompt_template=f'''{prompt}
-'''
+prompt_template=f'''<|prompter|>{prompt} <|endoftext|> <|assistant|>'''

 print("\n\n*** Generate:")
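For readers skimming the diff, a minimal sketch of how the updated OpenAssistant-style template is applied at generation time. It assumes the unquantised fp16 repo linked above and a plain `transformers` generate loop rather than the AutoGPTQ path shown in the README's own example; the generation parameters are illustrative only.

```python
# Sketch only: plain transformers instead of auto-gptq, fp16 repo from this README.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name_or_path = "flozi00/Llama-2-13B-german-assistant-v2"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto")

prompt = "Tell me about AI"
# New template from this commit: wrap the user turn in prompter/assistant tokens.
prompt_template = f"<|prompter|>{prompt} <|endoftext|> <|assistant|>"

input_ids = tokenizer(prompt_template, return_tensors="pt").input_ids.to(model.device)
output = model.generate(input_ids, max_new_tokens=256, temperature=0.7, do_sample=True)
print(tokenizer.decode(output[0]))
```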