Update README.md
README.md CHANGED
@@ -1,7 +1,6 @@
 ---
 library_name: transformers
 tags:
-- mergekit
 - merge
 - llama-cpp
 - gguf-my-repo
@@ -104,46 +103,7 @@ model-index:
 name: Open LLM Leaderboard
 ---
 
-# fuzzy-mittenz/
-This model was converted to GGUF format from [`suayptalha/HomerCreativeAnvita-Mix-Qw7B`](https://huggingface.co/suayptalha/HomerCreativeAnvita-Mix-Qw7B) using llama.cpp.
+# fuzzy-mittenz/BIZ_HCA-Mix-Qw7B-iQ4_K_M-GGUF
+This model was converted to GGUF format from [`suayptalha/HomerCreativeAnvita-Mix-Qw7B`](https://huggingface.co/suayptalha/HomerCreativeAnvita-Mix-Qw7B) using llama.cpp.
 Refer to the [original model card](https://huggingface.co/suayptalha/HomerCreativeAnvita-Mix-Qw7B) for more details on the model.
 
-## Use with llama.cpp
-Install llama.cpp through brew (works on Mac and Linux):
-
-```bash
-brew install llama.cpp
-```
-
-Invoke the llama.cpp server or the CLI.
-
-### CLI:
-```bash
-llama-cli --hf-repo fuzzy-mittenz/HomerCreativeAnvita-Mix-Qw7B-Q4_K_M-GGUF --hf-file homercreativeanvita-mix-qw7b-q4_k_m-imat.gguf -p "The meaning to life and the universe is"
-```
-
-### Server:
-```bash
-llama-server --hf-repo fuzzy-mittenz/HomerCreativeAnvita-Mix-Qw7B-Q4_K_M-GGUF --hf-file homercreativeanvita-mix-qw7b-q4_k_m-imat.gguf -c 2048
-```
-
-Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
-
-Step 1: Clone llama.cpp from GitHub.
-```
-git clone https://github.com/ggerganov/llama.cpp
-```
-
-Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
-```
-cd llama.cpp && LLAMA_CURL=1 make
-```
-
-Step 3: Run inference through the main binary.
-```
-./llama-cli --hf-repo fuzzy-mittenz/HomerCreativeAnvita-Mix-Qw7B-Q4_K_M-GGUF --hf-file homercreativeanvita-mix-qw7b-q4_k_m-imat.gguf -p "The meaning to life and the universe is"
-```
-or
-```
-./llama-server --hf-repo fuzzy-mittenz/HomerCreativeAnvita-Mix-Qw7B-Q4_K_M-GGUF --hf-file homercreativeanvita-mix-qw7b-q4_k_m-imat.gguf -c 2048
-```
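The updated card keeps only the conversion note and drops the usage section. As a rough sketch (not part of the commit), the renamed checkpoint could presumably still be run the same way: the repo id below is taken from the new heading, while the `--hf-file` name is an assumption carried over from the removed commands and may differ in the renamed repo.

```bash
# Sketch only: repo id from the new README heading; the GGUF file name is
# assumed to match the one referenced in the removed instructions.
llama-cli --hf-repo fuzzy-mittenz/BIZ_HCA-Mix-Qw7B-iQ4_K_M-GGUF \
  --hf-file homercreativeanvita-mix-qw7b-q4_k_m-imat.gguf \
  -p "The meaning to life and the universe is"

# Same assumption for the server, with the 2048-token context used in the removed example.
llama-server --hf-repo fuzzy-mittenz/BIZ_HCA-Mix-Qw7B-iQ4_K_M-GGUF \
  --hf-file homercreativeanvita-mix-qw7b-q4_k_m-imat.gguf \
  -c 2048
```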