fuzzy-mittenz committed
Commit 7cc3cb8 · verified · 1 Parent(s): c4b5a5c

Update README.md


![jabberwocki.png](https://cdn-uploads.huggingface.co/production/uploads/6593502ca2607099284523db/CblqVOZLgAAURnPH8qZYU.png)

Files changed (1)
  1. README.md +8 -33
README.md CHANGED
@@ -18,8 +18,14 @@ datasets:
  - Alignment-Lab-AI/orcamath-sharegpt
  ---
 
- # fuzzy-mittenz/Q25-1.5B-VeoLu-Q8_0-GGUF
- This model was converted to GGUF format from [`Alfitaria/Q25-1.5B-VeoLu`](https://huggingface.co/Alfitaria/Q25-1.5B-VeoLu) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
+
+ # IntelligentEstate/Jaberwocky-VEGA-qwn25-Q_8_0-GGUF
+
+ Jaberwocky is a small edge assistant model.
+
+ ![jabberwocki.png](https://cdn-uploads.huggingface.co/production/uploads/6593502ca2607099284523db/CblqVOZLgAAURnPH8qZYU.png)
+
+ This model was converted to GGUF format from [`Alfitaria/Q25-1.5B-VeoLu`](https://huggingface.co/Alfitaria/Q25-1.5B-VeoLu) using llama.cpp.
  Refer to the [original model card](https://huggingface.co/Alfitaria/Q25-1.5B-VeoLu) for more details on the model.
 
  ## Use with llama.cpp
@@ -30,34 +36,3 @@ brew install llama.cpp
 
  ```
  Invoke the llama.cpp server or the CLI.
-
- ### CLI:
- ```bash
- llama-cli --hf-repo fuzzy-mittenz/Q25-1.5B-VeoLu-Q8_0-GGUF --hf-file q25-1.5b-veolu-q8_0.gguf -p "The meaning of life and the universe is"
- ```
-
- ### Server:
- ```bash
- llama-server --hf-repo fuzzy-mittenz/Q25-1.5B-VeoLu-Q8_0-GGUF --hf-file q25-1.5b-veolu-q8_0.gguf -c 2048
- ```
-
- Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
-
- Step 1: Clone llama.cpp from GitHub.
- ```
- git clone https://github.com/ggerganov/llama.cpp
- ```
-
- Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
- ```
- cd llama.cpp && LLAMA_CURL=1 make
- ```
-
- Step 3: Run inference through the main binary.
- ```
- ./llama-cli --hf-repo fuzzy-mittenz/Q25-1.5B-VeoLu-Q8_0-GGUF --hf-file q25-1.5b-veolu-q8_0.gguf -p "The meaning of life and the universe is"
- ```
- or
- ```
- ./llama-server --hf-repo fuzzy-mittenz/Q25-1.5B-VeoLu-Q8_0-GGUF --hf-file q25-1.5b-veolu-q8_0.gguf -c 2048
- ```
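
The removed Step 2 says `LLAMA_CURL=1` combines with hardware-specific flags, but the old card only showed the plain build. A minimal sketch of the Nvidia variant that text describes, assuming the Makefile-based build the old card used (newer llama.cpp versions have since moved to CMake):

```bash
# Clone and build llama.cpp with curl support plus CUDA offload.
# LLAMA_CUDA=1 applies only to Nvidia GPUs on Linux, per the removed Step 2.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp && LLAMA_CURL=1 LLAMA_CUDA=1 make
```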
 
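The updated card keeps the "Use with llama.cpp" heading but drops the concrete commands. A minimal sketch of invoking the renamed checkpoint, reusing the flags from the removed examples; the `--hf-file` name below is hypothetical (check the repo's file list for the actual GGUF filename):

```bash
# One-shot generation with the CLI; --hf-repo pulls the GGUF from the Hub.
# The --hf-file value is a guess based on the old card's naming pattern.
llama-cli --hf-repo IntelligentEstate/Jaberwocky-VEGA-qwn25-Q_8_0-GGUF --hf-file jaberwocky-vega-qwn25-q_8_0.gguf -p "The meaning of life and the universe is"

# Or serve the model over HTTP with a 2048-token context window.
llama-server --hf-repo IntelligentEstate/Jaberwocky-VEGA-qwn25-Q_8_0-GGUF --hf-file jaberwocky-vega-qwn25-q_8_0.gguf -c 2048
```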