I really REALLY like this one lmao.
![image/webp](https://cdn-uploads.huggingface.co/production/uploads/6370b9c3789970f7bc5c14ac/talJYOBpPSu5fLYcpbnWX.webp)
Works great with just normal Llama 3 settings and such. I usually run it with the temperature around 1.1.
I feel like this was just an experiment that Sao made on a whim, but it just particularly stood out to me. :D
# Razrien/L3-8B-Tamamo-v1-Q8_0-GGUF
This model was converted to GGUF format from [`Sao10K/L3-8B-Tamamo-v1`](https://huggingface.co/Sao10K/L3-8B-Tamamo-v1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Sao10K/L3-8B-Tamamo-v1) for more details on the model.
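If you want to try the quantization locally, here is a minimal sketch using llama.cpp's `llama-cli`. The `--hf-file` name is assumed from GGUF-my-repo's usual lowercase naming convention and may differ, and the prompt is just an example:

```shell
# Pull the quantized model straight from the Hugging Face repo and run a prompt.
# --hf-repo downloads the GGUF on first use; the exact --hf-file name is an
# assumption based on the repo name and quant level.
llama-cli --hf-repo Razrien/L3-8B-Tamamo-v1-Q8_0-GGUF \
  --hf-file l3-8b-tamamo-v1-q8_0.gguf \
  --temp 1.1 \
  -p "Once upon a time"
```

The `--temp 1.1` matches the sampling temperature suggested earlier in this card.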