base_model: TheBossLevel123/Llama3-Toxic-8B-Float16
---

# pewapo6599/Llama3-Toxic-8B-imat-Q5_K_M-GGUF

This model was converted to GGUF format (imatrix quants) from [`TheBossLevel123/Llama3-Toxic-8B-Float16`](https://huggingface.co/TheBossLevel123/Llama3-Toxic-8B-Float16) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/TheBossLevel123/Llama3-Toxic-8B-Float16) for more details on the model.

## Use with llama.cpp
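The section body is missing here, so the following is only an illustrative sketch of the usage that GGUF-my-repo conversions typically suggest: running the model directly from the Hub with `llama-cli` or `llama-server`. The `--hf-file` value below is an assumed filename for the Q5_K_M quant, not taken from this repo — check the repository's file listing for the actual `.gguf` name before running.

```shell
# Install llama.cpp (Homebrew shown; building from source also works).
brew install llama.cpp

# One-shot generation with the CLI.
# NOTE: --hf-file is an assumed filename; verify it against the repo's files.
llama-cli --hf-repo pewapo6599/Llama3-Toxic-8B-imat-Q5_K_M-GGUF \
  --hf-file llama3-toxic-8b-imat-q5_k_m.gguf \
  -p "The meaning to life and the universe is"

# Or serve an OpenAI-compatible HTTP endpoint with a 2048-token context.
llama-server --hf-repo pewapo6599/Llama3-Toxic-8B-imat-Q5_K_M-GGUF \
  --hf-file llama3-toxic-8b-imat-q5_k_m.gguf \
  -c 2048
```

Both commands download and cache the GGUF file from the Hub on first run, so no manual download step is needed.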