Update README.md
README.md CHANGED
@@ -28,7 +28,7 @@ This repo is the result of quantising to 4-bit, 5-bit and 8-bit GGML for CPU (+C
 
 * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/manticore-13b-chat-pyg-GPTQ).
 * [4-bit, 5-bit and 8-bit GGML models for llama.cpp CPU (+CUDA) inference](https://huggingface.co/TheBloke/manticore-13b-chat-pyg-GGML).
-* [OpenAccess AI Collective's original float16 HF format repo for GPU inference and further conversions](https://huggingface.co/openaccess-ai-collective/manticore-13b-chat-pyg
+* [OpenAccess AI Collective's original float16 HF format repo for GPU inference and further conversions](https://huggingface.co/openaccess-ai-collective/manticore-13b-chat-pyg).
 
 ## THE FILES IN MAIN BRANCH REQUIRES LATEST LLAMA.CPP (May 19th 2023 - commit 2d5db48)!
 
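The heading in the diff context refers to llama.cpp's quantisation format change of May 19th 2023 (commit 2d5db48): GGML files from this repo's `main` branch only load with llama.cpp builds at or after that commit. Below is a minimal sketch of loading one of the quantised files, assuming the `llama-cpp-python` bindings were built against a compatible llama.cpp; the `.bin` file name and the prompt style are assumptions based on the repo's naming pattern, not taken from this diff:

```python
# A sketch, not an official loader: assumes llama-cpp-python was built
# against llama.cpp >= commit 2d5db48 (May 19th 2023 quantisation change).
from llama_cpp import Llama

# Hypothetical file name, following the repo's ggmlv3 q4_0 naming pattern.
llm = Llama(model_path="manticore-13b-chat-pyg.ggmlv3.q4_0.bin")

# Assumed USER:/ASSISTANT: chat style for Manticore-chat models.
output = llm("USER: Write a haiku about llamas.\nASSISTANT:", max_tokens=64)
print(output["choices"][0]["text"])
```

If the file was quantised before that llama.cpp commit (or the bindings were built against an older tree), loading fails with a format/version error rather than producing wrong output, which is why the README flags the requirement so loudly.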