JustinLin610 committed on
Commit
d448a78
1 Parent(s): 25e938c

Update README.md

Files changed (1):
  1. README.md +0 -4
README.md CHANGED
@@ -39,10 +39,6 @@ Cloning the repo may be inefficient, and thus you can manually download the GGUF
  huggingface-cli download Qwen/Qwen2-0.5B-Instruct-GGUF qwen2-0.5b-instruct-q5_k_m.gguf --local-dir . --local-dir-use-symlinks False
  ```
 
- With the upgrade of APIs of llama.cpp, `llama-gguf-split` is equivalent to the previous `gguf-split`.
- For the arguments of this command, the first is the path to the first split GGUF file, and the second is the path to the output GGUF file.
-
-
  To run Qwen2, you can use `llama-cli` (the previous `main`) or `llama-server` (the previous `server`).
  We recommend using the `llama-server` as it is simple and compatible with OpenAI API. For example:

 
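For context on the instructions the diff leaves in place: the downloaded GGUF can be served with `llama-server` and queried over its OpenAI-compatible API. A minimal sketch, assuming the GGUF file sits in the current directory and the default host; the port number here is an illustrative choice, not from the diff:

```shell
# Serve the quantized Qwen2 model over an OpenAI-compatible HTTP API.
# Assumes qwen2-0.5b-instruct-q5_k_m.gguf was downloaded with huggingface-cli as above.
llama-server -m qwen2-0.5b-instruct-q5_k_m.gguf --port 8080

# In another shell, send a request to the OpenAI-compatible chat endpoint:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello!"}]}'
```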