Update README.md
README.md CHANGED
@@ -3,9 +3,8 @@ This is the Q3_K_M GGUF port of the [lightblue/Karasu-Mixtral-8x22B-v0.1](https:
 
 ### How to use
 
-The easiest way to run this would be to download [LM Studio](https://lmstudio.ai/) and search for this model on the search bar.
 
-
+The way to run this directly is by using the llama.cpp package.
 
 ```bash
 git clone https://github.com/ggerganov/llama.cpp
@@ -15,6 +14,9 @@ huggingface-cli download lightblue/Karasu-Mixtral-8x22B-v0.1-gguf --local-dir /
 ./main -m /some/folder/Karasu-Mixtral-8x22B-v0.1-Q3_K_M-00001-of-00005.gguf -p "<s>[INST] Tell me a really funny joke. No puns! [/INST]" -n 256 -e
 ```
 
+If you would like a nice, easy GUI and have >64 GB of RAM, you could also run this using [LM Studio](https://lmstudio.ai/) and search for this model in the search bar.
+
+
 ### Commands to make this:
 
 ```bash
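Pulled out of the hunks above, the run instructions this commit adds amount to the following sequence. This is a sketch, not the README's exact text: the build step and the `--local-dir` target are assumptions, since the clone snippet and the download path are truncated in the diff.

```shell
# Clone llama.cpp; the diff cuts off after this line, so building with
# "make" to produce ./main is an assumption about the omitted step
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Download the GGUF shards from Hugging Face; the --local-dir target is
# an assumption chosen to match the /some/folder path used below
huggingface-cli download lightblue/Karasu-Mixtral-8x22B-v0.1-gguf --local-dir /some/folder

# Run inference on the first shard; llama.cpp loads the remaining
# shards of a split GGUF automatically
./main -m /some/folder/Karasu-Mixtral-8x22B-v0.1-Q3_K_M-00001-of-00005.gguf \
  -p "<s>[INST] Tell me a really funny joke. No puns! [/INST]" -n 256 -e
```

Note that `-e` tells llama.cpp to process escape sequences in the prompt, and `-n 256` caps generation at 256 tokens.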