---
license: other
library_name: transformers
tags:
  - 4-bit
  - AWQ
  - text-generation
  - autotrain_compatible
  - endpoints_compatible
  - merge
  - mergekit
  - lazymergekit
  - llama
base_model:
  - NousResearch/Meta-Llama-3-8B-Instruct
  - mlabonne/OrpoLlama-3-8B
  - Locutusque/Llama-3-Orca-1.0-8B
  - abacusai/Llama-3-Smaug-8B
pipeline_tag: text-generation
inference: false
quantized_by: Suparious
---

# mlabonne/ChimeraLlama-3-8B AWQ

## Model Summary

ChimeraLlama-3-8B outperforms Llama 3 8B Instruct on Nous' benchmark suite.

ChimeraLlama-3-8B is a merge of the following models using LazyMergekit:

- [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
- [mlabonne/OrpoLlama-3-8B](https://huggingface.co/mlabonne/OrpoLlama-3-8B)
- [Locutusque/Llama-3-Orca-1.0-8B](https://huggingface.co/Locutusque/Llama-3-Orca-1.0-8B)
- [abacusai/Llama-3-Smaug-8B](https://huggingface.co/abacusai/Llama-3-Smaug-8B)
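
The card doesn't reproduce the merge recipe itself. As a rough sketch of what a LazyMergekit-style run over these components could look like, the snippet below writes a hypothetical `dare_ties` configuration and hands it to mergekit's `mergekit-yaml` CLI; the merge method, densities, and weights are illustrative assumptions, not the published recipe.

```python
# Illustrative sketch only: the actual ChimeraLlama-3-8B recipe is not
# reproduced in this card. Assumes `pip install mergekit` and a dare_ties
# merge; the densities and weights below are guesses, not the real values.
import pathlib
import subprocess

config = """\
models:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    # base model: no explicit weight needed for dare_ties
  - model: mlabonne/OrpoLlama-3-8B
    parameters:
      density: 0.5
      weight: 0.4
  - model: Locutusque/Llama-3-Orca-1.0-8B
    parameters:
      density: 0.5
      weight: 0.3
  - model: abacusai/Llama-3-Smaug-8B
    parameters:
      density: 0.5
      weight: 0.3
merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3-8B-Instruct
dtype: bfloat16
"""

pathlib.Path("merge-config.yaml").write_text(config)

# mergekit-yaml is mergekit's standard CLI entry point: it reads the
# config and writes the merged model to the output directory.
subprocess.run(["mergekit-yaml", "merge-config.yaml", "./ChimeraLlama-3-8B"], check=True)
```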

## About AWQ

AWQ is an efficient, accurate, and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with quality equivalent to or better than the most commonly used GPTQ settings.
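
The exact quantization settings used for this checkpoint aren't stated in the card. As a minimal sketch of how a 4-bit AWQ model is typically produced with AutoAWQ, assuming common defaults (group size 128, GEMM kernels):

```python
# A minimal AutoAWQ quantization sketch. The settings shown (4-bit, group
# size 128, GEMM kernels) are common defaults, not the card's stated recipe.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "mlabonne/ChimeraLlama-3-8B"   # full-precision source model
quant_path = "ChimeraLlama-3-8B-AWQ"        # output directory
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# Load the full-precision model, quantize its weights with AWQ, and save the result
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
model.quantize(tokenizer, quant_config=quant_config)

model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```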

AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.

It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2 and later
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) - version 4.35.0 and later
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from code
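
For use from code, a minimal AutoAWQ inference sketch follows; the repo id below is an assumption, so substitute the actual location of this quantized checkpoint.

```python
# Minimal AutoAWQ inference sketch. The repo id below is an assumption;
# substitute the actual location of this quantized checkpoint.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

quant_path = "solidrust/ChimeraLlama-3-8B-AWQ"  # assumed repo id

model = AutoAWQForCausalLM.from_quantized(quant_path, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(quant_path)

prompt = "Explain AWQ quantization in one sentence."
tokens = tokenizer(prompt, return_tensors="pt").input_ids.cuda()

# Short, deterministic generation for a quick smoke test
output = model.generate(tokens, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```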