---
base_model: mlabonne/ChimeraLlama-3-8B
inference: false
library_name: transformers
license: other
merged_models:
- NousResearch/Meta-Llama-3-8B-Instruct
- mlabonne/OrpoLlama-3-8B
- Locutusque/Llama-3-Orca-1.0-8B
- abacusai/Llama-3-Smaug-8B
pipeline_tag: text-generation
quantized_by: Suparious
tags:
- 4-bit
- AWQ
- text-generation
- autotrain_compatible
- endpoints_compatible
- merge
- mergekit
- lazymergekit
- llama
---

# mlabonne/ChimeraLlama-3-8B AWQ
- Model creator: [mlabonne](https://huggingface.co/mlabonne)
- Original model: [ChimeraLlama-3-8B](https://huggingface.co/mlabonne/ChimeraLlama-3-8B)
## Model Summary
ChimeraLlama-3-8B outperforms Llama 3 8B Instruct on Nous' benchmark suite.
ChimeraLlama-3-8B is a merge of the following models using LazyMergekit (a reproduction sketch follows the list):
- NousResearch/Meta-Llama-3-8B-Instruct
- mlabonne/OrpoLlama-3-8B
- Locutusque/Llama-3-Orca-1.0-8B
- abacusai/Llama-3-Smaug-8B
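
For reference, a merge like this can be driven from mergekit's Python API (LazyMergekit is a Colab wrapper around mergekit). The sketch below is illustrative only: the merge method, base model, weights, and densities are placeholders, not the recipe actually used for ChimeraLlama-3-8B.

```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Placeholder recipe: merge method, base model, weights, and densities
# below are illustrative, NOT the actual ChimeraLlama-3-8B values.
CONFIG_YAML = """
models:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    parameters: {density: 0.6, weight: 0.5}
  - model: mlabonne/OrpoLlama-3-8B
    parameters: {density: 0.55, weight: 0.15}
  - model: Locutusque/Llama-3-Orca-1.0-8B
    parameters: {density: 0.55, weight: 0.15}
  - model: abacusai/Llama-3-Smaug-8B
    parameters: {density: 0.55, weight: 0.2}
merge_method: dare_ties  # assumed; LazyMergekit supports several methods
base_model: NousResearch/Meta-Llama-3-8B  # assumed base
dtype: bfloat16
"""

merge_config = MergeConfiguration.model_validate(yaml.safe_load(CONFIG_YAML))
run_merge(
    merge_config,
    out_path="./ChimeraLlama-3-8B",  # output directory for the merged model
    options=MergeOptions(copy_tokenizer=True),
)
```

DARE-TIES-style merges prune and rescale each model's weight deltas before combining them, which is why each entry carries a density as well as a weight.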
## About AWQ
AWQ is an efficient, accurate, and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference than GPTQ, with equivalent or better quality than the most commonly used GPTQ settings.
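
As a sketch of how such a 4-bit AWQ checkpoint is produced with the AutoAWQ library (the paths and quantization settings below are assumptions, not necessarily those used for this repo):

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

# Source and output paths are placeholders for illustration.
model_path = "mlabonne/ChimeraLlama-3-8B"
quant_path = "ChimeraLlama-3-8B-AWQ"

# Typical 4-bit AWQ settings; the exact settings for this repo may differ.
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Calibrate and quantize the weights, then save the quantized checkpoint.
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```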
AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.
It is supported by the following (a usage sketch follows the list):
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using the AutoAWQ loader
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later, with support for all model types
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) - version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
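
A minimal usage sketch with Transformers 4.35.0+ (with autoawq installed). The repository id below is an assumption; substitute the id of this AWQ repo:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository id is an assumption; substitute the id of this AWQ repo.
model_id = "solidrust/ChimeraLlama-3-8B-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Transformers >= 4.35.0 loads AWQ checkpoints directly onto a CUDA device.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Llama 3 chat formatting via the tokenizer's chat template.
messages = [{"role": "user", "content": "Explain AWQ quantization in one paragraph."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

On older Transformers versions, the same checkpoint can instead be loaded from Python with AutoAWQ's `AutoAWQForCausalLM.from_quantized`.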