Update README.md
README.md CHANGED
@@ -29,6 +29,17 @@ It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com
 * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/open-llama-7b-open-instruct-GGML)
 * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/VMware/open-llama-7b-open-instruct)
 
+## Prompt template
+
+Standard Alpaca:
+
+```
+Below is an instruction that describes a task. Write a response that appropriately completes the request.
+
+### Instruction: prompt
+### Response:
+```
+
 ## How to easily download and use this model in text-generation-webui
 
 Please make sure you're using the latest version of text-generation-webui
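For illustration, here is a minimal Python sketch of filling in the Standard Alpaca template added above before sending it to an inference backend. The `ALPACA_TEMPLATE` constant and `build_prompt` helper are hypothetical names for this example only; they are not part of this repo or of text-generation-webui.

```python
# Hypothetical helper for formatting the Standard Alpaca prompt shown above.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction: {instruction}\n"
    "### Response:"
)

def build_prompt(instruction: str) -> str:
    """Fill the template with the user's instruction text."""
    return ALPACA_TEMPLATE.format(instruction=instruction)

if __name__ == "__main__":
    # Example usage; the model's completion would follow "### Response:".
    print(build_prompt("Summarise what GPTQ 4-bit quantisation does."))
```

Note that text-generation-webui can apply an instruction template like this for you if you select a matching template in the UI, so manual formatting along these lines is generally only needed when calling the model directly.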