Tags: Transformers · Mixture of Experts · mixtral · sharegpt · axolotl · Inference Endpoints
mradermacher committed
Commit
66981ac
1 Parent(s): 10a9da4

auto-patch README.md

Files changed (1): README.md (+4 −1)
@@ -5,6 +5,10 @@ datasets:
 - microsoft/orca-math-word-problems-200k
 - teknium/OpenHermes-2.5
 language:
+- fr
+- it
+- de
+- es
 - en
 library_name: transformers
 license: apache-2.0
@@ -54,7 +58,6 @@ more details, including on how to concatenate multi-part files.
 | [PART 1](https://huggingface.co/mradermacher/Goku-8x22B-v0.2-GGUF/resolve/main/Goku-8x22B-v0.2.Q6_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Goku-8x22B-v0.2-GGUF/resolve/main/Goku-8x22B-v0.2.Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Goku-8x22B-v0.2-GGUF/resolve/main/Goku-8x22B-v0.2.Q6_K.gguf.part3of3) | Q6_K | 115.6 | very good quality |
 | [PART 1](https://huggingface.co/mradermacher/Goku-8x22B-v0.2-GGUF/resolve/main/Goku-8x22B-v0.2.Q8_0.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/Goku-8x22B-v0.2-GGUF/resolve/main/Goku-8x22B-v0.2.Q8_0.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/Goku-8x22B-v0.2-GGUF/resolve/main/Goku-8x22B-v0.2.Q8_0.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/Goku-8x22B-v0.2-GGUF/resolve/main/Goku-8x22B-v0.2.Q8_0.gguf.part4of4) | Q8_0 | 149.5 | fast, best quality |
 
-
 Here is a handy graph by ikawrakow comparing some lower-quality quant
 types (lower is better):
 
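The second hunk's context line mentions that the README explains how to concatenate multi-part files. For a quant split across parts like the Q6_K entry above, joining them is typically a plain byte-wise concatenation in part order; a minimal sketch (filenames taken from the table above, run in the directory holding the downloaded parts):

```shell
# Concatenate the three Q6_K parts, in order, into a single GGUF file.
# Plain byte-wise concatenation is assumed here; consult the README for
# the authoritative instructions.
cat Goku-8x22B-v0.2.Q6_K.gguf.part1of3 \
    Goku-8x22B-v0.2.Q6_K.gguf.part2of3 \
    Goku-8x22B-v0.2.Q6_K.gguf.part3of3 \
    > Goku-8x22B-v0.2.Q6_K.gguf
```

The Q8_0 entry works the same way with its four `partNof4` files.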