Update README.md
README.md
CHANGED
```diff
@@ -36,8 +36,6 @@ GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/gger
 * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
 * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
 
-None
-
 ## Repositories available
 
 * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Dolpin-Llama-13B-GPTQ)
```