Upload folder using huggingface_hub

- .gitattributes +2 -0
- Cydonia-v1.3-Magnum-v4-22B.imatrix.dat +3 -0
- README.md +30 -0
- cydonia-v1.3-magnum-v4-22b-i1-IQ1_S.gguf +3 -0
.gitattributes CHANGED
@@ -33,3 +33,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+Cydonia-v1.3-Magnum-v4-22B.imatrix.dat filter=lfs diff=lfs merge=lfs -text
+cydonia-v1.3-magnum-v4-22b-i1-IQ1_S.gguf filter=lfs diff=lfs merge=lfs -text
Cydonia-v1.3-Magnum-v4-22B.imatrix.dat ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8ee5d875a952ff43fde6cb85615be0280157245d0f4826492d85d484bbb9fdeb
+size 11940569
README.md ADDED
@@ -0,0 +1,30 @@
+---
+base_model: knifeayumu/Cydonia-v1.3-Magnum-v4-22B
+language:
+- en
+license: mit
+quantized_by: SpongeQuant
+tags:
+- SpongeQuant
+- i1-GGUF
+---
+
+
+Quantized to `i1-GGUF` using [SpongeQuant](https://github.com/SpongeEngine/SpongeQuant), the Oobabooga of LLM quantization. Chat & support at [Sponge Engine](https://discord.gg/azNmr2Gdgy).
+
+<figure>
+<img src="https://huggingface.co/spaces/SpongeEngine/README/resolve/main/028.png" alt="28. Conception">
+<figcaption>28. Conception</figcaption>
+</figure>
+
+<figure>
+<audio controls>
+<source src="https://huggingface.co/spaces/SpongeEngine/README/resolve/main/019.mp3" type="audio/mp3">
+Your browser does not support the audio element.
+</audio>
+<figcaption>19. Muğam – Kamil Jalilov</figcaption>
+</figure>
+
+***
+### What is a GGUF?
+GGUF is a file format for running LLMs (large language models) on a wide range of hardware, using either a regular processor (CPU) or a graphics card (GPU). Some LLMs require powerful, expensive hardware, but GGUF makes it possible to run them on more modest machines, even ones without high-end GPUs. To achieve this, GGUF models use a technique called quantization, which reduces their size and memory usage. This lets them run more efficiently, but at lower quantization settings the model may lose some accuracy or detail in its responses.
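The quantization idea described in the README can be sketched in a few lines of Python. This is an illustrative toy (symmetric round-to-nearest with one scale per block), not llama.cpp's actual IQ1_S scheme; the function names and bit width are assumptions for the example:

```python
# Toy weight quantization: map floats to low-bit signed ints plus a scale.
# Not the real IQ1_S codec -- just the general size/accuracy trade-off.
def quantize(weights, bits=4):
    qmax = 2 ** (bits - 1) - 1                    # e.g. 7 for 4-bit signed
    scale = max(abs(w) for w in weights) / qmax or 1.0
    q = [round(w / scale) for w in weights]       # small ints, much smaller to store
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

w = [0.12, -0.53, 0.98, -0.07]
q, s = quantize(w)
approx = dequantize(q, s)   # close to w, but not exact: that loss is the trade-off
```

Fewer bits mean a coarser grid of representable values, so storage shrinks while the reconstruction error (bounded here by half the scale per weight) grows.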
cydonia-v1.3-magnum-v4-22b-i1-IQ1_S.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d185b13eff3beeab4affc30d56721ad0c9795376c7329abc3d673d434bf2dd0a
+size 4829492672