Upload README.md
README.md CHANGED

@@ -58,7 +58,6 @@ These files were quantised using hardware kindly provided by [Massed Compute](ht
 <!-- repositories-available start -->
 ## Repositories available
 
-* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/OrcaMaid-13B-AWQ)
 * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/OrcaMaid-13B-GPTQ)
 * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/OrcaMaid-13B-GGUF)
 * [ddh0's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ddh0/OrcaMaid-13b)