huybery and kepkar committed on
Commit
1848b83
1 Parent(s): cc67cd2

Model name typo in README (#1)


- Model name typo in README (f472395fb52dd4843fbb660cd6b96acf1739856c)


Co-authored-by: Kolya <kepkar@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -36,12 +36,12 @@ We advise you to clone [`llama.cpp`](https://github.com/ggerganov/llama.cpp) and
  ## How to use
  Cloning the repo may be inefficient, and thus you can manually download the GGUF file that you need or use `huggingface-cli` (`pip install huggingface_hub`) as shown below:
  ```shell
- huggingface-cli download Qwen/CodeQwen1.5-7B-Chat-GGUF codeqwen1_5-7b-chat-q5_k_m.gguf --local-dir . --local-dir-use-symlinks False
+ huggingface-cli download Qwen/CodeQwen1.5-7B-Chat-GGUF codeqwen-1_5-7b-chat-q5_k_m.gguf --local-dir . --local-dir-use-symlinks False
  ```
 
  We demonstrate how to use `llama.cpp` to run Qwen1.5:
  ```shell
- ./main -m codeqwen1_5-7b-chat-q5_k_m.gguf -n 512 --color -i -cml -f prompts/chat-with-qwen.txt
+ ./main -m codeqwen-1_5-7b-chat-q5_k_m.gguf -n 512 --color -i -cml -f prompts/chat-with-qwen.txt
  ```
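For reference, the same download can also be done from Python with the `huggingface_hub` package that the README already tells you to install. This is a minimal sketch using the corrected filename from this commit, not something shipped in the repo:

```python
# Minimal sketch: download the renamed GGUF file with the huggingface_hub Python API
# instead of the huggingface-cli command. Repo id and filename are taken from the
# corrected README command above.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="Qwen/CodeQwen1.5-7B-Chat-GGUF",
    filename="codeqwen-1_5-7b-chat-q5_k_m.gguf",
    local_dir=".",  # save into the current directory, as in the CLI example
)
print(path)  # local path to the GGUF file, ready to pass to ./main -m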