---
license: other
tags:
- merge
- mergekit
- lazymergekit
base_model:
- NousResearch/Meta-Llama-3-8B-Instruct
- NousResearch/Meta-Llama-3-8B-Instruct
- NousResearch/Meta-Llama-3-8B-Instruct
- NousResearch/Meta-Llama-3-8B-Instruct
- NousResearch/Meta-Llama-3-8B-Instruct
---

**Exllamav2** quant (**exl2** / **6.0 bpw**) made with ExLlamaV2 v0.0.21

Other EXL2 quants:

| **Quant (bpw)** | **Model Size** | **lm_head (bits)** |
| ----- | ---------- | ------- |
| **[2.2](https://huggingface.co/Zoyd/mlabonne_Meta-Llama-3-12B-Instruct-2_2bpw_exl2)** | 4176 MB | 6 |
| **[2.5](https://huggingface.co/Zoyd/mlabonne_Meta-Llama-3-12B-Instruct-2_5bpw_exl2)** | 4519 MB | 6 |
| **[3.0](https://huggingface.co/Zoyd/mlabonne_Meta-Llama-3-12B-Instruct-3_0bpw_exl2)** | 5143 MB | 6 |
| **[3.5](https://huggingface.co/Zoyd/mlabonne_Meta-Llama-3-12B-Instruct-3_5bpw_exl2)** | 5766 MB | 6 |
| **[3.75](https://huggingface.co/Zoyd/mlabonne_Meta-Llama-3-12B-Instruct-3_75bpw_exl2)** | 6077 MB | 6 |
| **[4.0](https://huggingface.co/Zoyd/mlabonne_Meta-Llama-3-12B-Instruct-4_0bpw_exl2)** | 6391 MB | 6 |
| **[4.25](https://huggingface.co/Zoyd/mlabonne_Meta-Llama-3-12B-Instruct-4_25bpw_exl2)** | 6703 MB | 6 |
| **[5.0](https://huggingface.co/Zoyd/mlabonne_Meta-Llama-3-12B-Instruct-5_0bpw_exl2)** | 7637 MB | 6 |
| **[6.0](https://huggingface.co/Zoyd/mlabonne_Meta-Llama-3-12B-Instruct-6_0bpw_exl2)** | 8992 MB | 8 |
| **[6.5](https://huggingface.co/Zoyd/mlabonne_Meta-Llama-3-12B-Instruct-6_5bpw_exl2)** | 9616 MB | 8 |
| **[8.0](https://huggingface.co/Zoyd/mlabonne_Meta-Llama-3-12B-Instruct-8_0bpw_exl2)** | 11473 MB | 8 |
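
These quants are usually loaded through a frontend such as text-generation-webui or TabbyAPI, but they can also be run with the `exllamav2` Python library directly. Below is a minimal sketch under the v0.0.x API; the model directory path and the sampler values are placeholders, not part of the original card:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Placeholder: point this at the downloaded quant directory
model_dir = "./Meta-Llama-3-12B-Instruct-6_0bpw_exl2"

config = ExLlamaV2Config()
config.model_dir = model_dir
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split the weights across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7
settings.top_k = 50
settings.top_p = 0.95

# For best results with an instruct model, wrap the prompt in the
# Llama 3 chat template rather than passing raw text.
output = generator.generate_simple("What is a large language model?", settings, num_tokens=256)
print(output)
```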

# Meta-Llama-3-12B-Instruct

Meta-Llama-3-12B-Instruct is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
* [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
* [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
* [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
* [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)

## 🏆 Evaluation

| Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|-------|------:|------:|---------:|-------:|------:|
|[Meta-Llama-3-12B-Instruct](https://huggingface.co/mlabonne/Meta-Llama-3-12B-Instruct)| 41.7| 67.71| 52.75| 40.58| 50.69|
|[Meta-Llama-3-12B](https://huggingface.co/mlabonne/Meta-Llama-3-12B)| 29.46| 68.01| 41.02| 35.57| 43.52|

## 🧩 Configuration

```yaml
slices:
  - sources:
      - model: NousResearch/Meta-Llama-3-8B-Instruct
        layer_range: [0, 9]
  - sources:
      - model: NousResearch/Meta-Llama-3-8B-Instruct
        layer_range: [5, 14]
  - sources:
      - model: NousResearch/Meta-Llama-3-8B-Instruct
        layer_range: [10, 19]
  - sources:
      - model: NousResearch/Meta-Llama-3-8B-Instruct
        layer_range: [15, 24]
  - sources:
      - model: NousResearch/Meta-Llama-3-8B-Instruct
        layer_range: [20, 32]
merge_method: passthrough
dtype: bfloat16
```

The passthrough method stacks overlapping slices of a single model: 9 + 9 + 9 + 9 + 12 = 48 layers in total versus 32 in the base 8B model, which is what brings the parameter count to roughly 12B.

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "mlabonne/Meta-Llama-3-12B-Instruct"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build a Llama 3 chat prompt from the messages
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
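
As a quick sanity check on the merge (not part of the original card), the merged checkpoint should report 48 hidden layers, which can be confirmed with a standard `transformers` call:

```python
from transformers import AutoConfig

# The stacked slices total 9 + 9 + 9 + 9 + 12 = 48 decoder layers,
# versus 32 in the base 8B model.
config = AutoConfig.from_pretrained("mlabonne/Meta-Llama-3-12B-Instruct")
print(config.num_hidden_layers)  # expected: 48
```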