Clevyby and mradermacher committed on
Commit ccfaf09 (0 parents)

Duplicate from mradermacher/MythoMax-L2-13b-GGUF

Co-authored-by: Michael Radermacher <mradermacher@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,49 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ MythoMax-L2-13b.Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ MythoMax-L2-13b.Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ MythoMax-L2-13b.Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ MythoMax-L2-13b.Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ MythoMax-L2-13b.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+ MythoMax-L2-13b.Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ MythoMax-L2-13b.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ MythoMax-L2-13b.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ MythoMax-L2-13b.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ MythoMax-L2-13b.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ MythoMax-L2-13b.IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
+ MythoMax-L2-13b.IQ3_S.gguf filter=lfs diff=lfs merge=lfs -text
+ MythoMax-L2-13b.IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
+ MythoMax-L2-13b.IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
MythoMax-L2-13b.IQ3_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c8927fc3f0e589fc1722f5225b929bd8fd7f7eb900f8918c21e2bac61a2655c7
+ size 6177789920
MythoMax-L2-13b.IQ3_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2ee9cc696742f02cc8d619d25de883e1f1f0ef1e5aa4847d0cc04d45afc4d64b
+ size 5852260320
MythoMax-L2-13b.IQ3_XS.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fe3e3fd4ce272c8328ac428f0c6bb2e7f738de7a7ce43d86cabeaed791b2f4bc
+ size 5554890720
MythoMax-L2-13b.IQ4_XS.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:614cf3d32a408997e91271766a1d1ca407de274b224a45ec9aefe918542d78c1
+ size 7212797920
MythoMax-L2-13b.Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:24e3d38aad286f1e707bb533f67ceed02661f3692c7f9470273ee042d7b4f5c6
+ size 5047549920
MythoMax-L2-13b.Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b4a3ac5038634a4a643aa5781e4e7e3f0bd4c03718537e1ddba961a751487e29
+ size 7122839520
MythoMax-L2-13b.Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3f91b215b3e44528575cd614a6a437bdde0934472837568177fb64d578ec59e4
+ size 6531049440
MythoMax-L2-13b.Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b05723c19e10b4be296477f6dfd1e6f2473cde19c9c03988a0b9c2218747555d
+ size 5852260320
MythoMax-L2-13b.Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dd380778890370174190012285ee1d528a77895ba8a8c3ab5de665aacebe7947
+ size 8059236320
MythoMax-L2-13b.Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:641dd62b675e595555a39726aec5cb76b249d3f7be4fdf51236bd13edd431849
+ size 7616458720
MythoMax-L2-13b.Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e5919740d26d04e81a089859bdfcf43bd3ed3196ea9833f9b8ca0ef8930649f0
+ size 9423204320
MythoMax-L2-13b.Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:df823ba7c9881118216de3adf9bb9beca1bc444c680aa3a7c9c3656697d06e98
+ size 9165565920
MythoMax-L2-13b.Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:43febe140d7f4191228e8e2caa437fe5c57619a319ce970bf1c652fde328678a
+ size 10872420320
MythoMax-L2-13b.Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d0fe0b2c98087d03be45d2116f856c096ced86c7a3c41999ad9ca041c639aa44
+ size 13984919520
README.md ADDED
@@ -0,0 +1,61 @@
+ ---
+ base_model: Gryphe/MythoMax-L2-13b
+ language:
+ - en
+ library_name: transformers
+ license: other
+ quantized_by: mradermacher
+ ---
+ ## About
+
+ static quants of https://huggingface.co/Gryphe/MythoMax-L2-13b
+
+ <!-- provided-files -->
+ weighted/imatrix quants are available at https://huggingface.co/mradermacher/MythoMax-L2-13b-i1-GGUF
+ ## Usage
+
+ If you are unsure how to use GGUF files, refer to one of [TheBloke's
+ READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
+ more details, including how to concatenate multi-part files.
+
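+ As a concrete example, one way to run a single-file quant from this repo is through the llama-cpp-python bindings. The sketch below is a minimal, hedged example: the choice of the Q4_K_M file, the context size, and the Alpaca-style prompt are illustrative assumptions, not requirements of this card.
+
+ ```python
+ # Minimal sketch: download one quant from this repo and run a short completion.
+ # Assumes `pip install huggingface_hub llama-cpp-python`; the filename and
+ # prompt format below are illustrative choices.
+ from huggingface_hub import hf_hub_download
+ from llama_cpp import Llama
+
+ # Fetch a single-file GGUF quant (any entry from the table below works).
+ model_path = hf_hub_download(
+     repo_id="mradermacher/MythoMax-L2-13b-GGUF",
+     filename="MythoMax-L2-13b.Q4_K_M.gguf",
+ )
+
+ # Load the model; n_ctx (and n_gpu_layers, if set) depend on your hardware.
+ llm = Llama(model_path=model_path, n_ctx=4096)
+
+ # Alpaca-style prompt (assumed here; check the base model card for its template).
+ prompt = "### Instruction:\nWrite a two-sentence greeting.\n\n### Response:\n"
+ out = llm(prompt, max_tokens=64, temperature=0.7)
+ print(out["choices"][0]["text"])
+ ```
+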
+ ## Provided Quants
+
+ (sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)
+
+ | Link | Type | Size/GB | Notes |
+ |:-----|:-----|--------:|:------|
+ | [GGUF](https://huggingface.co/mradermacher/MythoMax-L2-13b-GGUF/resolve/main/MythoMax-L2-13b.Q2_K.gguf) | Q2_K | 5.1 | |
+ | [GGUF](https://huggingface.co/mradermacher/MythoMax-L2-13b-GGUF/resolve/main/MythoMax-L2-13b.IQ3_XS.gguf) | IQ3_XS | 5.7 | |
+ | [GGUF](https://huggingface.co/mradermacher/MythoMax-L2-13b-GGUF/resolve/main/MythoMax-L2-13b.IQ3_S.gguf) | IQ3_S | 6.0 | beats Q3_K* |
+ | [GGUF](https://huggingface.co/mradermacher/MythoMax-L2-13b-GGUF/resolve/main/MythoMax-L2-13b.Q3_K_S.gguf) | Q3_K_S | 6.0 | |
+ | [GGUF](https://huggingface.co/mradermacher/MythoMax-L2-13b-GGUF/resolve/main/MythoMax-L2-13b.IQ3_M.gguf) | IQ3_M | 6.3 | |
+ | [GGUF](https://huggingface.co/mradermacher/MythoMax-L2-13b-GGUF/resolve/main/MythoMax-L2-13b.Q3_K_M.gguf) | Q3_K_M | 6.6 | lower quality |
+ | [GGUF](https://huggingface.co/mradermacher/MythoMax-L2-13b-GGUF/resolve/main/MythoMax-L2-13b.Q3_K_L.gguf) | Q3_K_L | 7.2 | |
+ | [GGUF](https://huggingface.co/mradermacher/MythoMax-L2-13b-GGUF/resolve/main/MythoMax-L2-13b.IQ4_XS.gguf) | IQ4_XS | 7.3 | |
+ | [GGUF](https://huggingface.co/mradermacher/MythoMax-L2-13b-GGUF/resolve/main/MythoMax-L2-13b.Q4_K_S.gguf) | Q4_K_S | 7.7 | fast, recommended |
+ | [GGUF](https://huggingface.co/mradermacher/MythoMax-L2-13b-GGUF/resolve/main/MythoMax-L2-13b.Q4_K_M.gguf) | Q4_K_M | 8.2 | fast, recommended |
+ | [GGUF](https://huggingface.co/mradermacher/MythoMax-L2-13b-GGUF/resolve/main/MythoMax-L2-13b.Q5_K_S.gguf) | Q5_K_S | 9.3 | |
+ | [GGUF](https://huggingface.co/mradermacher/MythoMax-L2-13b-GGUF/resolve/main/MythoMax-L2-13b.Q5_K_M.gguf) | Q5_K_M | 9.5 | |
+ | [GGUF](https://huggingface.co/mradermacher/MythoMax-L2-13b-GGUF/resolve/main/MythoMax-L2-13b.Q6_K.gguf) | Q6_K | 11.0 | very good quality |
+ | [GGUF](https://huggingface.co/mradermacher/MythoMax-L2-13b-GGUF/resolve/main/MythoMax-L2-13b.Q8_0.gguf) | Q8_0 | 14.1 | fast, best quality |
+
+ Here is a handy graph by ikawrakow comparing some lower-quality quant
+ types (lower is better):
+
+ ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
+
+ And here are Artefact2's thoughts on the matter:
+ https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
+
+ ## FAQ / Model Request
+
+ See https://huggingface.co/mradermacher/model_requests for answers to
+ questions you might have, or if you would like another model quantized.
+
+ ## Thanks
+
+ I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
+ me use its servers and for providing upgrades to my workstation to enable
+ this work in my free time.
+
+ <!-- end -->