---
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- gguf
- mergekit
- mixture of experts
- merge
- 4x8B
- Llama3 MOE
- creative
- creative writing
- fiction writing
- plot generation
- sub-plot generation
- story generation
- scene continue
- storytelling
- fiction story
- science fiction
- romance
- all genres
- story
- writing
- vivid prosing
- vivid writing
- fiction
- roleplaying
- bfloat16
- swearing
- rp
- horror
- conversational
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/DavidAU/L3-MOE-4x8B-Dark-Planet-Rebel-FURY-25B

<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
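
If you prefer to open such a request from a script rather than through the web UI, a hypothetical sketch using `huggingface_hub` could look like the following (the title and description text are made up for illustration):

```python
# Hypothetical sketch: open a Community Discussion on this repo to
# request imatrix quants, using huggingface_hub's discussion API.
from huggingface_hub import HfApi

api = HfApi()  # picks up your cached login or the HF_TOKEN env var
api.create_discussion(
    repo_id="mradermacher/L3-MOE-4x8B-Dark-Planet-Rebel-FURY-25B-GGUF",
    title="Request: weighted/imatrix quants",
    description="Would it be possible to provide imatrix quants for this model?",
)
```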

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.
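
As a small illustration of the concatenation step, here is a minimal Python sketch. The `.partXofY` file naming is an assumption for illustration, so check the repo's file listing for the actual scheme, and note this only applies to plain byte-split parts:

```python
# Minimal sketch: rejoin a byte-split multi-part GGUF file.
# The ".partXofY" naming is a placeholder -- check the actual file
# listing; parts must be joined in ascending order (a plain
# lexicographic sort is only safe for fewer than ten parts).
import glob
import shutil

parts = sorted(glob.glob("model.Q8_0.gguf.part*"))
assert parts, "no part files found"
with open("model.Q8_0.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # stream bytes; avoids loading a whole part into RAM
```

On the command line, `cat model.Q8_0.gguf.part1of2 model.Q8_0.gguf.part2of2 > model.Q8_0.gguf` does the same job for byte-split parts.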

## Provided Quants

(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3-MOE-4x8B-Dark-Planet-Rebel-FURY-25B-GGUF/resolve/main/L3-MOE-4x8B-Dark-Planet-Rebel-FURY-25B.Q2_K.gguf) | Q2_K | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/L3-MOE-4x8B-Dark-Planet-Rebel-FURY-25B-GGUF/resolve/main/L3-MOE-4x8B-Dark-Planet-Rebel-FURY-25B.Q3_K_S.gguf) | Q3_K_S | 11.0 | |
| [GGUF](https://huggingface.co/mradermacher/L3-MOE-4x8B-Dark-Planet-Rebel-FURY-25B-GGUF/resolve/main/L3-MOE-4x8B-Dark-Planet-Rebel-FURY-25B.Q3_K_M.gguf) | Q3_K_M | 12.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3-MOE-4x8B-Dark-Planet-Rebel-FURY-25B-GGUF/resolve/main/L3-MOE-4x8B-Dark-Planet-Rebel-FURY-25B.Q3_K_L.gguf) | Q3_K_L | 13.1 | |
| [GGUF](https://huggingface.co/mradermacher/L3-MOE-4x8B-Dark-Planet-Rebel-FURY-25B-GGUF/resolve/main/L3-MOE-4x8B-Dark-Planet-Rebel-FURY-25B.Q4_K_S.gguf) | Q4_K_S | 14.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3-MOE-4x8B-Dark-Planet-Rebel-FURY-25B-GGUF/resolve/main/L3-MOE-4x8B-Dark-Planet-Rebel-FURY-25B.Q4_K_M.gguf) | Q4_K_M | 15.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3-MOE-4x8B-Dark-Planet-Rebel-FURY-25B-GGUF/resolve/main/L3-MOE-4x8B-Dark-Planet-Rebel-FURY-25B.Q5_K_S.gguf) | Q5_K_S | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/L3-MOE-4x8B-Dark-Planet-Rebel-FURY-25B-GGUF/resolve/main/L3-MOE-4x8B-Dark-Planet-Rebel-FURY-25B.Q5_K_M.gguf) | Q5_K_M | 17.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3-MOE-4x8B-Dark-Planet-Rebel-FURY-25B-GGUF/resolve/main/L3-MOE-4x8B-Dark-Planet-Rebel-FURY-25B.Q6_K.gguf) | Q6_K | 20.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/L3-MOE-4x8B-Dark-Planet-Rebel-FURY-25B-GGUF/resolve/main/L3-MOE-4x8B-Dark-Planet-Rebel-FURY-25B.Q8_0.gguf) | Q8_0 | 26.6 | fast, best quality |
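
If you would rather script the download than use the links above, the following minimal sketch, assuming the `huggingface_hub` and `llama-cpp-python` packages are installed, fetches one quant and runs a short test completion with it (the Q4_K_M pick and the context size are just example choices):

```python
# Minimal sketch: download one quant from this repo and load it with
# llama-cpp-python for a short test completion.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/L3-MOE-4x8B-Dark-Planet-Rebel-FURY-25B-GGUF",
    filename="L3-MOE-4x8B-Dark-Planet-Rebel-FURY-25B.Q4_K_M.gguf",  # "fast, recommended" in the table
)
llm = Llama(model_path=path, n_ctx=4096)  # example context size
result = llm("Write the opening line of a dark science-fiction story:", max_tokens=64)
print(result["choices"][0]["text"])
```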

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->