Update README.md
README.md CHANGED
@@ -38,10 +38,18 @@ Please read carefully below to see how to use it.
 ## Repositories available
 
 * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-GPTQ)
-* [
+* [4, 5, and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-GGML)
 * [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-fp16)
 * [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/WizardLM/WizardLM-13B-V1.1)
 
+## Prompt template: Vicuna
+
+```
+A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
+USER: prompt
+ASSISTANT:
+```
+
 ## How to easily download and use this model in text-generation-webui with ExLlama
 
 Please make sure you're using the latest version of text-generation-webui
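The README's own instructions cover downloading through text-generation-webui. As an alternative sketch that is not part of the model card, the GPTQ repo listed above can also be fetched with huggingface_hub; the repo id comes from the "Repositories available" list, while the local directory is an arbitrary example:

```python
# Sketch: fetch the 4-bit GPTQ repo listed above with huggingface_hub.
# This is an alternative to the webui downloader the README describes;
# the local_dir value is an arbitrary example, not from the model card.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-GPTQ",
    revision="main",  # other quantisation variants, if any, live on other branches
    local_dir="models/WizardLM-13B-V1-1-SuperHOT-8K-GPTQ",
)
print(f"Model files downloaded to {local_path}")
```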
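The Vicuna template added in this commit is a fixed system line followed by USER/ASSISTANT turns, with generation continuing after the trailing `ASSISTANT:`. A minimal sketch of filling it in Python follows; the `build_prompt` helper and the example question are illustrative, and only the template text itself comes from the README:

```python
# Sketch: assemble the Vicuna-style prompt added in this commit.
# Only the template text comes from the README; the helper name and
# the example question are illustrative.
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_prompt(user_message: str) -> str:
    """Wrap a user message in the template; the model is expected to
    continue generating after the trailing 'ASSISTANT:'."""
    return f"{SYSTEM}\nUSER: {user_message}\nASSISTANT:"

print(build_prompt("What is GGML quantisation?"))
```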