---
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---

# leafspark/Yi-1.5-34B-Chat-16K-Q2_K-GGUF
This model was converted to GGUF format from [`01-ai/Yi-1.5-34B-Chat-16K`](https://huggingface.co/01-ai/Yi-1.5-34B-Chat-16K) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/01-ai/Yi-1.5-34B-Chat-16K) for more details on the model.
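If you prefer to download the quantized file manually rather than letting llama.cpp fetch it via `--hf-repo`, the `huggingface-cli` tool (from the `huggingface_hub` package) can pull just the GGUF file; the target directory here is only an example:

```bash
# Download only the Q2_K GGUF file into the current directory
huggingface-cli download leafspark/Yi-1.5-34B-Chat-16K-Q2_K-GGUF yi-1.5-34b-chat-16k.Q2_K.gguf --local-dir .
```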
## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install ggerganov/ggerganov/llama.cpp
```
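If that tap is unavailable on your system, the formula has also been published in homebrew-core under the plain name (assuming a reasonably current Homebrew):

```bash
# Alternative: install the llama.cpp formula from homebrew-core
brew install llama.cpp
```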
Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo leafspark/Yi-1.5-34B-Chat-16K-Q2_K-GGUF --model yi-1.5-34b-chat-16k.Q2_K.gguf -p "The meaning to life and the universe is"
```
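The invocation above uses llama.cpp's default context size. Since this model was trained with a 16K context window, you can likely raise it with `-c`; the value below assumes you have enough RAM for the larger KV cache:

```bash
# Run with the model's full 16K context (memory permitting)
llama-cli --hf-repo leafspark/Yi-1.5-34B-Chat-16K-Q2_K-GGUF --model yi-1.5-34b-chat-16k.Q2_K.gguf -c 16384 -p "The meaning to life and the universe is"
```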

Server:

```bash
llama-server --hf-repo leafspark/Yi-1.5-34B-Chat-16K-Q2_K-GGUF --model yi-1.5-34b-chat-16k.Q2_K.gguf -c 2048
```
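Once the server is running, one way to query it is through its OpenAI-compatible chat endpoint; this sketch assumes the default host and port (127.0.0.1:8080):

```bash
# Send a chat request to the local llama-server instance
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "What is the meaning of life?"}
    ]
  }'
```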

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m yi-1.5-34b-chat-16k.Q2_K.gguf -n 128
```
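Note that newer llama.cpp releases have moved from `make` to CMake and renamed the `main` binary to `llama-cli`, so if the one-liner above fails on a current checkout, a build along these lines should work (paths assume the default CMake layout):

```bash
# Build with CMake and run the renamed CLI binary
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build && cmake --build build --config Release
./build/bin/llama-cli -m yi-1.5-34b-chat-16k.Q2_K.gguf -n 128
```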