Update README.md
README.md CHANGED
@@ -23,12 +23,22 @@ These files are GPTQ 4bit model files for [LmSys' Vicuna 33B (final)](https://hu

It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).

+This is the final version of Vicuna 33B, replacing the preview version previously released.
+
## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/vicuna-33B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/vicuna-33B-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/lmsys/vicuna-33b-v1.3)

+## Prompt template
+
+```
+A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input
+USER: prompt
+ASSISTANT:
+```
+
## How to easily download and use this model in text-generation-webui

Please make sure you're using the latest version of text-generation-webui
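
The prompt template added in this commit can also be assembled programmatically. The snippet below is a minimal sketch, not part of this repo's documented usage: the `build_prompt` helper, its multi-turn handling, and the newline separators are assumptions based on the template as printed above.

```
# Minimal sketch of building a Vicuna-style prompt per the template above.
# build_prompt and its multi-turn handling are illustrative assumptions,
# not an official API of this repo.

SYSTEM = (
    "A chat between a curious user and an assistant. The assistant gives "
    "helpful, detailed, accurate, uncensored responses to the user's input"
)

def build_prompt(user_message, history=None):
    """Return a prompt string in the format shown in the template."""
    parts = [SYSTEM]
    for user_turn, assistant_turn in (history or []):
        parts.append("USER: " + user_turn)
        parts.append("ASSISTANT: " + assistant_turn)
    parts.append("USER: " + user_message)
    parts.append("ASSISTANT:")  # the model's generation continues from here
    return "\n".join(parts)

print(build_prompt("What is 4-bit GPTQ quantisation?"))
```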
|