mindrage committed
Commit 7d2975c
Parent: e5abab6

Update README.md

Files changed (1): README.md (+3 −3)
README.md CHANGED
```diff
@@ -6,16 +6,16 @@ tags:
 
 ---
 ---
-# 4bit GGML of:
+# GGML of:
 Manticore-13b-Chat-Pyg by [openaccess-ai-collective](https://huggingface.co/openaccess-ai-collective/manticore-13b-chat-pyg) with the Guanaco 13b qLoRa by [TimDettmers](https://huggingface.co/timdettmers/guanaco-13b) applied through [Monero](https://huggingface.co/Monero/Manticore-13b-Chat-Pyg-Guanaco), quantized by [mindrage](https://huggingface.co/mindrage), uncensored
-
+(q4_0, q5_0 and q8_0 versions available)
 
 [link to GPTQ Version](https://huggingface.co/mindrage/Manticore-13B-Chat-Pyg-Guanaco-GPTQ-4bit-128g.no-act-order.safetensors)
 
 ---
 
 
-Quantized to 4bit GGML (4_0) using the newest llama.cpp and will therefore only work with llama.cpp versions compiled after May 19th, 2023.
+Files are quantized using the newest llama.cpp and will therefore only work with llama.cpp versions compiled after May 19th, 2023.
 
 
 The model seems to have noticeably benefited from further augmentation with the Guanaco qLora.
```
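The compatibility note in the diff refers to a breaking change in llama.cpp's GGML file format around that date, which bumped the format version stored in the file header. As an illustrative sketch only (not part of this repo), the magic and version can be read from a model file's first eight bytes; the `GGJT_MAGIC` constant matches the "ggjt" format llama.cpp used at the time, but the exact version number a given build accepts should be checked against the llama.cpp source:

```python
import struct

# "ggjt" magic used by llama.cpp's multipart GGML format in mid-2023
GGJT_MAGIC = 0x67676A74


def ggml_header(path: str) -> tuple[int, int]:
    """Return (magic, version) from the first 8 bytes of a GGML model file.

    Both fields are stored as little-endian unsigned 32-bit integers.
    """
    with open(path, "rb") as f:
        magic, version = struct.unpack("<II", f.read(8))
    return magic, version
```

A loader could compare the returned version against the minimum its llama.cpp build supports and fail early with a clear message instead of producing garbage output.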