---
base_model:
  - 152334H/miqu-1-70b-sf
  - lizpreciatior/lzlv_70b_fp16_hf
language:
  - en
library_name: transformers
quantized_by: mradermacher
tags:
  - mergekit
  - merge
---

## About

weighted/imatrix quants of https://huggingface.co/wolfram/miquliz-120b-v2.0

## Usage

If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for details, including how to concatenate multi-part files.
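As a minimal sketch, multi-part quants can be reassembled by concatenating the parts in order; the filename pattern below is an assumption, so adjust it to the parts you actually downloaded:

```python
from pathlib import Path

# Hypothetical filenames: adjust the glob pattern to the quant you downloaded.
parts = sorted(Path(".").glob("miquliz-120b-v2.0.i1-Q4_K_M.gguf.part*"))
assert parts, "no part files found"

with open("miquliz-120b-v2.0.i1-Q4_K_M.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            # Stream in 1 MiB chunks; these files are tens of GB.
            while chunk := src.read(1 << 20):
                out.write(chunk)
```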

## Provided Quants

(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| GGUF | i1-IQ1_S | 25.6 | |
| GGUF | i1-IQ2_XXS | 32.1 | |
| GGUF | i1-IQ2_XS | 35.7 | |
| GGUF | i1-Q2_K | 44.5 | IQ3_XXS probably better |
| GGUF | i1-IQ3_XXS | 47.2 | fast, lower quality |
| PART 1 PART 2 | i1-Q3_K_XS | 49.2 | |
| PART 1 PART 2 | i1-Q3_K_S | 52.1 | IQ3_XS probably better |
| PART 1 PART 2 | i1-Q3_K_M | 58.1 | IQ3_S probably better |
| PART 1 PART 2 | i1-Q3_K_L | 63.3 | IQ3_M probably better |
| PART 1 PART 2 | i1-Q4_K_S | 68.6 | almost as good as Q4_K_M |
| PART 1 PART 2 | i1-Q4_K_M | 72.5 | fast, medium quality |
| PART 1 PART 2 | i1-Q5_K_M | 85.3 | best weighted quant |
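Once a complete `.gguf` file is on disk (after concatenating parts where needed), here is a minimal sketch of loading it with llama-cpp-python; the model path and parameter values are assumptions, not recommendations:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Hypothetical path/parameters: point model_path at the quant you downloaded.
llm = Llama(
    model_path="miquliz-120b-v2.0.i1-Q4_K_M.gguf",
    n_ctx=4096,       # context window size (larger costs more memory)
    n_gpu_layers=-1,  # offload all layers to GPU if VRAM allows; 0 = CPU only
)

out = llm("Q: What is GGUF? A:", max_tokens=64)
print(out["choices"][0]["text"])
```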

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![quant type comparison graph](image.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9