VlSav committed
Commit f46cb52 (1 parent: ca1f52e)

Update README.md

Files changed (1): README.md (+5 −5)
README.md CHANGED
````diff
@@ -12,7 +12,7 @@ tags:
 - gguf-my-repo
 ---
 
-# VlSav/saiga_llama3_8b-Q6_K-GGUF
+# VlSav/saiga_llama3_8b_v7-Q6_K-GGUF
 This model was converted to GGUF format from [`IlyaGusev/saiga_llama3_8b`](https://huggingface.co/IlyaGusev/saiga_llama3_8b) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
 Refer to the [original model card](https://huggingface.co/IlyaGusev/saiga_llama3_8b) for more details on the model.
 
@@ -27,12 +27,12 @@ Invoke the llama.cpp server or the CLI.
 
 ### CLI:
 ```bash
-llama-cli --hf-repo VlSav/saiga_llama3_8b-Q6_K-GGUF --hf-file saiga_llama3_8b-q6_k.gguf -p "The meaning to life and the universe is"
+llama-cli --hf-repo VlSav/saiga_llama3_8b_v7-Q6_K-GGUF --hf-file saiga_llama3_8b-q6_k.gguf -p "The meaning to life and the universe is"
 ```
 
 ### Server:
 ```bash
-llama-server --hf-repo VlSav/saiga_llama3_8b-Q6_K-GGUF --hf-file saiga_llama3_8b-q6_k.gguf -c 2048
+llama-server --hf-repo VlSav/saiga_llama3_8b_v7-Q6_K-GGUF --hf-file saiga_llama3_8b-q6_k.gguf -c 2048
 ```
 
 Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
@@ -49,9 +49,9 @@ cd llama.cpp && LLAMA_CURL=1 make
 
 Step 3: Run inference through the main binary.
 ```
-./llama-cli --hf-repo VlSav/saiga_llama3_8b-Q6_K-GGUF --hf-file saiga_llama3_8b-q6_k.gguf -p "The meaning to life and the universe is"
+./llama-cli --hf-repo VlSav/saiga_llama3_8b_v7-Q6_K-GGUF --hf-file saiga_llama3_8b-q6_k.gguf -p "The meaning to life and the universe is"
 ```
 or
 ```
-./llama-server --hf-repo VlSav/saiga_llama3_8b-Q6_K-GGUF --hf-file saiga_llama3_8b-q6_k.gguf -c 2048
+./llama-server --hf-repo VlSav/saiga_llama3_8b_v7-Q6_K-GGUF --hf-file saiga_llama3_8b-q6_k.gguf -c 2048
 ```
````
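
Beyond the `--hf-repo`/`--hf-file` flags shown in the diff, the quantized file can also be downloaded once and run from a local path. A minimal sketch, assuming the repo and file names from the updated README; the `./models` target directory is an arbitrary choice, not something this commit specifies:

```bash
# Fetch the single GGUF file from the renamed repo into ./models
huggingface-cli download VlSav/saiga_llama3_8b_v7-Q6_K-GGUF \
  saiga_llama3_8b-q6_k.gguf --local-dir ./models

# Point llama-cli at the local file instead of --hf-repo/--hf-file
llama-cli -m ./models/saiga_llama3_8b-q6_k.gguf -p "The meaning to life and the universe is"
```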
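
The `llama-server` invocation above starts an HTTP server rather than an interactive prompt. A minimal sketch of querying it, assuming llama.cpp's default bind address of 127.0.0.1:8080 and its OpenAI-compatible `/v1/chat/completions` endpoint:

```bash
# Send a chat request to the running llama-server instance
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [{"role": "user", "content": "Hello! Who are you?"}],
        "temperature": 0.3,
        "max_tokens": 128
      }'
```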