Update README.md
---

# jsfs11/TemptressTensor-10.7B-v0.1a-GGUF

This model was converted to GGUF format from [`jsfs11/TemptressTensor-10.7B-v0.1a`](https://huggingface.co/jsfs11/TemptressTensor-10.7B-v0.1a) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.

Refer to the [original model card](https://huggingface.co/jsfs11/TemptressTensor-10.7B-v0.1a) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew.
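The install and invocation steps are elided in this diff view; a minimal sketch of what they typically look like for a GGUF-my-repo conversion follows. The `--hf-repo` and `--model` values below are taken from this repo's hunk header; the prompt string and context size are illustrative assumptions.

```shell
# Install llama.cpp via Homebrew (macOS/Linux)
brew install llama.cpp

# Run inference with the CLI, fetching the quantized file from this repo
llama-cli --hf-repo jsfs11/temptresstensor-10.7B-v0.1a-Q5_K_M-GGUF \
  --model temptresstensor-10.7b-v0.1a.Q5_K_M.gguf \
  -p "The meaning to life and the universe is"

# Or start an HTTP server exposing an OpenAI-compatible endpoint
llama-server --hf-repo jsfs11/temptresstensor-10.7B-v0.1a-Q5_K_M-GGUF \
  --model temptresstensor-10.7b-v0.1a.Q5_K_M.gguf -c 2048
```

Both commands download the `.gguf` file from the Hugging Face repo on first use and cache it locally.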
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m temptresstensor-10.7b-v0.1a.Q5_K_M.gguf -n 128
```