Upload folder using huggingface_hub
README.md CHANGED
````diff
@@ -15,7 +15,7 @@ license_link: https://huggingface.co/microsoft/wavecoder-ultra-6.7b/blob/main/LI
 pipeline_tag: text-generation
 ---
 
-# MoMonir/wavecoder-ultra-6.7b-GGUF
+# MoMonir/wavecoder-ultra-6.7b-Q4_K_M-GGUF
 This model was converted to GGUF format from [`microsoft/wavecoder-ultra-6.7b`](https://huggingface.co/microsoft/wavecoder-ultra-6.7b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
 Refer to the [original model card](https://huggingface.co/microsoft/wavecoder-ultra-6.7b) for more details on the model.
 ## Use with llama.cpp
@@ -30,17 +30,17 @@ Invoke the llama.cpp server or the CLI.
 CLI:
 
 ```bash
-llama-cli --hf-repo MoMonir/wavecoder-ultra-6.7b-GGUF --model wavecoder-ultra-6.7b.
+llama-cli --hf-repo MoMonir/wavecoder-ultra-6.7b-Q4_K_M-GGUF --model wavecoder-ultra-6.7b.Q4_K_M.gguf -p "The meaning to life and the universe is"
 ```
 
 Server:
 
 ```bash
-llama-server --hf-repo MoMonir/wavecoder-ultra-6.7b-GGUF --model wavecoder-ultra-6.7b.
+llama-server --hf-repo MoMonir/wavecoder-ultra-6.7b-Q4_K_M-GGUF --model wavecoder-ultra-6.7b.Q4_K_M.gguf -c 2048
 ```
 
 Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
 
 ```
-git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m wavecoder-ultra-6.7b.
+git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m wavecoder-ultra-6.7b.Q4_K_M.gguf -n 128
````
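For reference, the conversion this diff describes can also be reproduced locally. The sketch below is a rough outline of the usual llama.cpp workflow, not the exact commands the GGUF-my-repo space runs; the script and binary names (`convert-hf-to-gguf.py`, `quantize`) have moved around between llama.cpp releases, and the local checkpoint path is a placeholder.

```bash
# Rough sketch of a local GGUF conversion with llama.cpp (not the exact
# GGUF-my-repo pipeline); script/binary names vary between releases.
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make
pip install -r requirements.txt

# Convert the original Hugging Face checkpoint to an f16 GGUF file.
# /path/to/wavecoder-ultra-6.7b is a placeholder for a local download.
python convert-hf-to-gguf.py /path/to/wavecoder-ultra-6.7b \
  --outfile wavecoder-ultra-6.7b.f16.gguf

# Quantize to Q4_K_M, the scheme named in this repo's filename.
./quantize wavecoder-ultra-6.7b.f16.gguf wavecoder-ultra-6.7b.Q4_K_M.gguf Q4_K_M
```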
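Once `llama-server` is running as shown in the diff, it can be queried over HTTP. A minimal sketch, assuming the server's default address of 127.0.0.1:8080 and llama.cpp's OpenAI-compatible chat endpoint; the prompt here is illustrative.

```bash
# Query the running llama-server; assumes the default 127.0.0.1:8080
# and the OpenAI-compatible /v1/chat/completions endpoint.
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [{"role": "user", "content": "Write a Python function that reverses a string."}],
        "max_tokens": 128
      }'
```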