Add links for GGML and GPTQ versions of the model
README.md CHANGED
@@ -19,7 +19,9 @@ In 4 bit mode, the model fits into 51% of A100 80GB (40.8GB) 41559MiB
 500gb of RAM/Swap was required to merge the model.
 
 ## GGML & GPTQ versions
-
+Thanks to [TheBloke](https://huggingface.co/TheBloke), who has created the GGML and GPTQ versions:
+* https://huggingface.co/TheBloke/llama2_70b_chat_uncensored-GGML
+* https://huggingface.co/TheBloke/llama2_70b_chat_uncensored-GPTQ
 
 # Prompt style
 The model was trained with the following prompt style:
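For reference, a minimal sketch (not part of this commit) of loading the GPTQ build linked above through the Hugging Face `transformers` API. It assumes `transformers`, `accelerate`, `optimum`, and `auto-gptq` are installed so the pre-quantized checkpoint can be deserialized, and that your prompt follows the style described in the README's "Prompt style" section.

```python
# Sketch only: load the GPTQ repo linked in the diff above.
# Assumes optimum + auto-gptq are available for dequantization support.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/llama2_70b_chat_uncensored-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # spread the quantized weights across available GPU(s)
)

# Replace with a prompt formatted per the "Prompt style" section of the README.
prompt = "Hello, how are you?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The GGML build is intended for CPU-oriented runtimes such as llama.cpp rather than `transformers`, so it is downloaded and run with that tooling instead.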