m0javad committed
Commit
0cf867c
1 Parent(s): 2b17796

Update README.md

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
@@ -21,8 +21,8 @@ co2_eq_emissions:
  emissions: 232380
 ---
 
- # m0javad/PersianMind-v1.0-Q8_0-GGUF
- This model was converted to GGUF format from [`universitytehran/PersianMind-v1.0`](https://huggingface.co/universitytehran/PersianMind-v1.0) using llama.
+ # SmartGitiCorp/PersianMind-v1.0-Q8_0-GGUF
+ This model was converted to GGUF format from [`universitytehran/PersianMind-v1.0`](https://huggingface.co/universitytehran/PersianMind-v1.0) using llama.cpp by [`M0javad`](https://huggingface.co/m0javad).
 Refer to the [original model card](https://huggingface.co/universitytehran/PersianMind-v1.0) for more details on the model.
 ## Use with llama.cpp
 
@@ -36,13 +36,13 @@ Invoke the llama.cpp server or the CLI.
 CLI:
 
 ```bash
- llama-cli --hf-repo m0javad/PersianMind-v1.0-Q8_0-GGUF --model persianmind-v1.0.Q8_0.gguf -p "The meaning to life and the universe is"
+ llama-cli --hf-repo SmartGitiCorp/PersianMind-v1.0-Q8_0-GGUF --model persianmind-v1.0.Q8_0.gguf -p "The meaning to life and the universe is"
 ```
 
 Server:
 
 ```bash
- llama-server --hf-repo m0javad/PersianMind-v1.0-Q8_0-GGUF --model persianmind-v1.0.Q8_0.gguf -c 2048
+ llama-server --hf-repo SmartGitiCorp/PersianMind-v1.0-Q8_0-GGUF --model persianmind-v1.0.Q8_0.gguf -c 2048
 ```
 
 Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
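Both commands assume the `llama-cli` and `llama-server` binaries are already on your PATH. A minimal sketch of one way to install them, assuming Homebrew on macOS or Linux (building from source per the llama.cpp README works too):

```bash
# Install llama.cpp (provides the llama-cli and llama-server binaries).
# Assumes Homebrew is available; see the llama.cpp README for source builds.
brew install llama.cpp
```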
 
21
  emissions: 232380
22
  ---
23
 
24
+ # SmartGitiCorp/PersianMind-v1.0-Q8_0-GGUF
25
+ This model was converted to GGUF format from [`universitytehran/PersianMind-v1.0`](https://huggingface.co/universitytehran/PersianMind-v1.0) using llama by [`M0javad`](https://huggingface.co/m0javad).
26
  Refer to the [original model card](https://huggingface.co/universitytehran/PersianMind-v1.0) for more details on the model.
27
  ## Use with llama.cpp
28
 
 
36
  CLI:
37
 
38
  ```bash
39
+ llama-cli --hf-repo SmartGitiCorp/PersianMind-v1.0-Q8_0-GGUF --model persianmind-v1.0.Q8_0.gguf -p "The meaning to life and the universe is"
40
  ```
41
 
42
  Server:
43
 
44
  ```bash
45
+ llama-server --hf-repo SmartGitiCorp/PersianMind-v1.0-Q8_0-GGUF --model persianmind-v1.0.Q8_0.gguf -c 2048
46
  ```
47
 
48
  Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
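Once `llama-server` is running, it exposes an OpenAI-compatible HTTP API. A minimal sketch of querying it with `curl`, assuming the default host and port (127.0.0.1:8080):

```bash
# Send a chat completion request to the local llama-server instance.
# 127.0.0.1:8080 is the server default; override with --host/--port.
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Introduce yourself briefly."}
        ],
        "temperature": 0.7,
        "max_tokens": 128
      }'
```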