Update README.md
README.md CHANGED

@@ -12,7 +12,7 @@ converted to GGML and quantized to 4 bit, ready to be used with [llama.cpp](http
 
 In order to use this model with llama.cpp
 
-* install
+* install llama.cpp as [described in the docs](https://github.com/ggerganov/llama.cpp#usage)
 * download this model
 * move it into the `models` subfolder of llama.cpp
 * run inferences with the additional parameter `-m ./models/7B/ggml-openllama-7b-300bt-q4_0.bin`
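After those steps, a full inference invocation might look like the sketch below. Only the `-m ./models/7B/ggml-openllama-7b-300bt-q4_0.bin` path comes from the README; the `./main` binary name, the prompt, and the `-n` token count are illustrative assumptions about a typical llama.cpp build.

```sh
# Minimal sketch of an inference run, assuming llama.cpp has been built
# in the current directory and the model file was moved into models/7B/.
./main \
  -m ./models/7B/ggml-openllama-7b-300bt-q4_0.bin \
  -p "Building a website can be done in 10 simple steps:" \
  -n 128
```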