TheBloke committed on
Commit 7184369 · 1 Parent(s): d0fc24d

Update README.md

Files changed (1)
  1. README.md +0 -2
README.md CHANGED
@@ -36,8 +36,6 @@ GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/gger
  * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
  * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
 
- None
-
  ## Repositories available
 
  * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Dolpin-Llama-13B-GPTQ)
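For context, the README section this commit edits points readers to llama-cpp-python for running the GGML files. A minimal sketch of that usage follows; the model filename, GPU layer count, and prompt text are assumptions, not part of this repository, and newer llama-cpp-python releases expect GGUF files, so a version that still reads GGML may be required.

```python
# Minimal sketch: loading a GGML quantisation with llama-cpp-python.
# The filename below is an assumed example; use whichever quantised file
# you downloaded from the GGML repo.
from llama_cpp import Llama

llm = Llama(
    model_path="dolphin-llama-13b.ggmlv3.q4_K_M.bin",  # assumed local filename
    n_ctx=2048,       # context window size
    n_gpu_layers=32,  # offload layers to GPU if llama.cpp was built with GPU support
)

# Prompt text is illustrative only; follow the prompt template documented in the model card.
output = llm("### Instruction: Say hello.\n### Response:", max_tokens=64)
print(output["choices"][0]["text"])
```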
 