Update README.md
README.md CHANGED
@@ -11,7 +11,7 @@ license_link: https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat/blob/main/LICENSE
 pipeline_tag: text-generation
 ---
 
-# MoMonir/CodeQwen1.5-7B-GGUF
+# MoMonir/CodeQwen1.5-7B-Chat-GGUF
 This model was converted to GGUF format from [`Qwen/CodeQwen1.5-7B-Chat`](https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
 Refer to the [original model card](https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat) for more details on the model.
 ## Use with llama.cpp
@@ -32,7 +32,7 @@ llama-cli --hf-repo MoMonir/CodeQwen1.5-7B-Chat-GGUF --model codeqwen1.5-7b-chat
 Server:
 
 ```bash
-llama-server --hf-repo MoMonir/CodeQwen1.5-7B-GGUF --model codeqwen1.5-7b-chat.Q5_K_M.gguf -c 2048
+llama-server --hf-repo MoMonir/CodeQwen1.5-7B-Chat-GGUF --model codeqwen1.5-7b-chat.Q5_K_M.gguf -c 2048
 ```
 
 Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
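Once `llama-server` is up, it exposes llama.cpp's OpenAI-compatible HTTP API. A minimal sketch of querying it with `curl`, assuming the default `127.0.0.1:8080` bind address (override with `--host`/`--port`) and an illustrative prompt:

```bash
# Send a chat completion request to the local llama-server instance.
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Write a Python function that checks if a string is a palindrome."}
        ],
        "temperature": 0.2
      }'
```

The response follows the OpenAI chat-completion JSON shape, so existing OpenAI client libraries can also be pointed at the server by changing their base URL.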