---
license: apache-2.0
---

# CorticalStack/pastiche-crown-clown-7b-dare-dpo-awq

CorticalStack/pastiche-crown-clown-7b-dare-dpo-awq is an AWQ quantised version of CorticalStack/pastiche-crown-clown-7b-dare-dpo.

## About AWQ

AWQ is an efficient, accurate and blazing-fast low-bit weight quantisation method, currently supporting 4-bit quantisation. It offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.

AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.

It is supported by Transformers, vLLM, Hugging Face Text Generation Inference (TGI), and AutoAWQ.
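As a minimal sketch (assuming `transformers` with `autoawq` installed and a supported NVIDIA GPU; the prompt and generation parameters are illustrative, not from this card), the quantised model loads like any other checkpoint:

```python
MODEL_ID = "CorticalStack/pastiche-crown-clown-7b-dare-dpo-awq"


def main():
    # Requires: pip install transformers autoawq, plus a supported NVIDIA GPU.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # device_map="cuda" places the 4-bit weights on the GPU.
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="cuda")

    inputs = tokenizer("Tell me a story.", return_tensors="pt").to("cuda")
    outputs = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))


if __name__ == "__main__":
    main()
```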

## AWQ configuration

- Zero point: True
- Q group size: 128
- W bit: 4
- Version: GEMM
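The settings above correspond to an AutoAWQ-style `quant_config` dictionary; the sketch below mirrors the listed values (the exact dictionary the author used is not shown in the card):

```python
# AutoAWQ-style quantisation config matching the settings listed above.
quant_config = {
    "zero_point": True,   # asymmetric quantisation with a zero point
    "q_group_size": 128,  # weights quantised in groups of 128
    "w_bit": 4,           # 4-bit weight quantisation
    "version": "GEMM",    # GEMM kernel variant
}

print(quant_config)
```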