---
base_model: mlabonne/ChimeraLlama-3-8B
inference: false
library_name: transformers
license: other
merged_models:
  - NousResearch/Meta-Llama-3-8B-Instruct
  - mlabonne/OrpoLlama-3-8B
  - Locutusque/Llama-3-Orca-1.0-8B
  - abacusai/Llama-3-Smaug-8B
pipeline_tag: text-generation
quantized_by: Suparious
tags:
  - 4-bit
  - AWQ
  - text-generation
  - autotrain_compatible
  - endpoints_compatible
  - merge
  - mergekit
  - lazymergekit
  - llama
---

# mlabonne/ChimeraLlama-3-8B AWQ

## Model Summary

ChimeraLlama-3-8B outperforms Llama 3 8B Instruct on Nous' benchmark suite.

ChimeraLlama-3-8B is a merge of the following models using LazyMergekit:

- [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
- [mlabonne/OrpoLlama-3-8B](https://huggingface.co/mlabonne/OrpoLlama-3-8B)
- [Locutusque/Llama-3-Orca-1.0-8B](https://huggingface.co/Locutusque/Llama-3-Orca-1.0-8B)
- [abacusai/Llama-3-Smaug-8B](https://huggingface.co/abacusai/Llama-3-Smaug-8B)
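For illustration, a mergekit config for this kind of multi-model merge might look like the sketch below. The merge method, base model, densities, and weights here are assumptions for the sake of the example, not the actual recipe used for ChimeraLlama-3-8B:

```yaml
# Illustrative only: the real ChimeraLlama-3-8B recipe may differ.
models:
  - model: NousResearch/Meta-Llama-3-8B
    # Base model: contributes no task vector of its own.
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    parameters:
      density: 0.6   # assumed: fraction of delta weights retained
      weight: 0.5    # assumed: relative contribution to the merge
  - model: mlabonne/OrpoLlama-3-8B
    parameters:
      density: 0.6
      weight: 0.2
  - model: Locutusque/Llama-3-Orca-1.0-8B
    parameters:
      density: 0.6
      weight: 0.2
  - model: abacusai/Llama-3-Smaug-8B
    parameters:
      density: 0.6
      weight: 0.1
merge_method: dare_ties   # assumed; mergekit also supports ties, slerp, etc.
base_model: NousResearch/Meta-Llama-3-8B
dtype: float16
```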

## About AWQ

AWQ is an efficient, accurate, and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with quality equivalent to or better than the most commonly used GPTQ settings.
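As a minimal sketch, a 4-bit AWQ quant like this one can be produced with the AutoAWQ library. The output path and quantization settings below are typical defaults, assumed rather than taken from this repo:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "mlabonne/ChimeraLlama-3-8B"  # source (unquantized) model
quant_path = "ChimeraLlama-3-8B-AWQ"       # output directory (assumed name)

# Typical 4-bit AWQ settings; the exact config used for this quant is assumed.
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Calibrate and quantize, then write the quantized weights to disk.
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```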

AWQ models are currently supported on Linux and Windows, and require NVIDIA GPUs. macOS users should use GGUF models instead.

It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) - version 4.35.0 and later
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
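For a minimal Transformers inference sketch, something like the following should work; the repo id below is a hypothetical placeholder for this quant, and the `autoawq` package must be installed alongside Transformers 4.35.0 or later:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id; replace with the actual id of this AWQ quant.
model_id = "solidrust/ChimeraLlama-3-8B-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Transformers loads AWQ checkpoints natively when autoawq is installed.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a chat-formatted prompt using the model's chat template.
messages = [{"role": "user", "content": "Explain AWQ quantization in one paragraph."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```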