---
base_model:
- 152334H/miqu-1-70b-sf
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About

Static quants of https://huggingface.co/wolfram/miqu-1-103b.

Weighted/imatrix quants are available at https://huggingface.co/mradermacher/miqu-1-103b-i1-GGUF.
<!-- provided-files -->
## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
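As a minimal sketch, the two parts of the Q6_K file listed in the table below can be joined into a single GGUF with `cat` (the filenames are assumed to match the downloaded parts exactly; keep the originals until the merged file is verified to load):

```shell
# Join the downloaded split files into one usable GGUF file.
# Part order matters: part1of2 must come before part2of2.
cat miqu-1-103b.Q6_K.gguf.part1of2 \
    miqu-1-103b.Q6_K.gguf.part2of2 \
    > miqu-1-103b.Q6_K.gguf
```

The merged file can then be passed directly to any GGUF-compatible runtime.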
## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [PART 1](https://huggingface.co/mradermacher/miqu-1-103b-GGUF/resolve/main/miqu-1-103b.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/miqu-1-103b-GGUF/resolve/main/miqu-1-103b.Q6_K.gguf.part2of2) | Q6_K | 85.0 | very good quality |
Here is a handy graph comparing some lower-quality quant types (lower is better):

![image.png](https://cdn-uploads.huggingface.co/production/uploads/645ce413a19f3e64bbeece31/dEiT6xDvxyANdetzVG1tX.png)
<!-- end -->