---
base_model: meta-llama/Meta-Llama-3-8B
inference: false
model_creator: astronomer-io
model_name: Meta-Llama-3-8B
model_type: llama
pipeline_tag: text-generation
quantized_by: davidxmle
license: other
license_name: llama-3
license_link: https://huggingface.co/meta-llama/Meta-Llama-3-8B/blob/main/LICENSE
tags:
- llama
- llama-3
- facebook
- meta
- astronomer
- gptq
- pretrained
- quantized
- finetuned
- autotrain_compatible
- endpoints_compatible
datasets:
- wikitext
---

This model is generously created and made open source by Astronomer.

Astronomer is the de facto company for Apache Airflow, the most trusted open-source framework for data orchestration and MLOps.


# Llama-3-8B-GPTQ-8-Bit
- Original model creator: [Meta Llama from Meta](https://huggingface.co/meta-llama)
- Original model: [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)
- Built with Meta Llama 3
- Quantized by [David Xue](https://www.linkedin.com/in/david-xue-uva/) from [Astronomer](https://astronomer.io)

## MUST READ: Very Important!! Note About Untrained Special Tokens in Llama 3 Base (Non-instruct) Models & Fine-tuning Llama 3 Base
- **If you intend to fine-tune this model with any added tokens, or fine-tune it for instruction following, please use the `untrained-special-tokens-fixed` branch/revision.**
- Special tokens, such as the ones used for instruction formatting, are undertrained in the Llama 3 base models.
- Credits: discovered by Daniel Han, https://twitter.com/danielhanchen/status/1781395882925343058
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/655ad0f8727df37c77a09cb9/1U2rRrx60p1pNeeAZw8Rd.png)

## Important Note About Serving with vLLM & oobabooga/text-generation-webui
- When serving this model with vLLM, make sure all requests include `"stop_token_ids": [128001, 128009]` to temporarily work around the non-stop generation issue.
  - vLLM does not yet respect `generation_config.json`.
  - The vLLM team is working on a fix: https://github.com/vllm-project/vllm/issues/4180
- For oobabooga/text-generation-webui:
  - Load the model via AutoGPTQ with `no_inject_fused_attention` enabled; this works around a bug in the AutoGPTQ library.
  - Under `Parameters` -> `Generation` -> `Skip special tokens`: turn this off (deselect).
  - Under `Parameters` -> `Generation` -> `Custom stopping strings`: add `"<|end_of_text|>","<|eot_id|>"` to the field.

## Description
This repo contains 8-bit GPTQ-quantized model files for [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B).

This model can be loaded with just over 10GB of VRAM (compared to 16.07GB for the original model) and can be served lightning fast on the most affordable Nvidia GPUs (Nvidia T4, Nvidia K80, RTX 4070, etc.). Thanks to its higher bit width, the 8-bit GPTQ quant shows minimal quality degradation compared to the original `bfloat16` model.

The `untrained-special-tokens-fixed` branch contains the same model as the main branch, but with the untrained special tokens fixed: tokens whose maximum embedding value is 0 in both `input_embeddings` and `output_embeddings` are identified as untrained, and their embeddings are set to the per-feature average of all trained tokens. Using this branch is recommended if you plan to fine-tune with your own added tokens or for instruction following.
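For illustration only, below is a minimal sketch of that fix, not the exact script used to produce the `untrained-special-tokens-fixed` branch; it assumes `torch` and `transformers` are installed and that you have access to the gated base model weights:

```python
import torch
from transformers import AutoModelForCausalLM

# Load the base model in bfloat16 (requires access to the gated meta-llama repo).
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B", torch_dtype=torch.bfloat16
)

input_emb = model.get_input_embeddings().weight.data    # shape: [vocab_size, hidden_size]
output_emb = model.get_output_embeddings().weight.data  # shape: [vocab_size, hidden_size]

# A token is treated as untrained if the max value of its embedding row is 0
# in both the input and output embedding matrices.
untrained = (input_emb.max(dim=-1).values == 0) & (output_emb.max(dim=-1).values == 0)
trained = ~untrained

# Per-feature average of the trained rows (computed in float32, cast back).
mean_in = input_emb[trained].float().mean(dim=0).to(input_emb.dtype)
mean_out = output_emb[trained].float().mean(dim=0).to(output_emb.dtype)

# Overwrite the untrained rows; the patched model would then be saved and quantized as usual.
input_emb[untrained] = mean_in
output_emb[untrained] = mean_out

print(f"Fixed {int(untrained.sum())} untrained token embeddings")
```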
## GPTQ Quantization Method
- This model was quantized with the AutoGPTQ library, following the best practices noted in the [GPTQ paper](https://arxiv.org/abs/2210.17323).
- Quantization is calibrated with random samples from the specified dataset (wikitext for now) to minimize accuracy loss.

| Branch | Bits | Group Size | Act Order | Damp % | GPTQ Dataset | Sequence Length | VRAM Size | ExLlama | Special Tokens Fixed | Description |
| ------ | ---- | ---------- | --------- | ------ | ------------ | --------------- | --------- | ------- | -------------------- | ----------- |
| [main](https://huggingface.co/astronomer-io/Llama-3-8B-GPTQ-8-Bit/tree/main) | 8 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 8192 | 9.74 GB | No | No | 8-bit, with Act Order and group size 32g. Minimal accuracy loss with a decent reduction in VRAM usage. |
| [untrained-special-tokens-fixed](https://huggingface.co/astronomer-io/Llama-3-8B-GPTQ-8-Bit/tree/untrained-special-tokens-fixed) | 8 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 8192 | 9.74 GB | No | Yes | Same as the main branch, except the untrained special tokens that caused exploding/NaN gradients have had their embedding values set to the per-feature average of the trained tokens. |
| More variants to come | TBD | TBD | TBD | TBD | TBD | TBD | TBD | TBD | TBD | Additional GPTQ 8-bit variants (e.g. with a 128g group size) may be uploaded in the future. |

## Serving this GPTQ model using vLLM
Serving this model via vLLM has been tested on an Nvidia T4 (16GB VRAM) with the command below:
```bash
python -m vllm.entrypoints.openai.api_server --model astronomer-io/Llama-3-8B-GPTQ-8-Bit --max-model-len 8192 --dtype float16
```
To work around the non-stop token generation bug, make sure requests sent to the vLLM endpoint include `"stop_token_ids": [128001, 128009]` (a minimal example request is shown at the bottom of this page).

### Contributors
- Quantized by [David Xue, Machine Learning Engineer from Astronomer](https://www.linkedin.com/in/david-xue-uva/)
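
### Example request with stop token IDs
As noted in the serving section above, each request should include the Llama 3 stop token IDs until vLLM respects `generation_config.json`. Below is a minimal, illustrative sketch of such a request against the OpenAI-compatible completions endpoint started by the command above; the host, port, prompt, and sampling values are placeholder assumptions, and it requires the Python `requests` package:

```python
import requests

# Assumes the vLLM OpenAI-compatible server from the serving section is running
# locally on the default port 8000; adjust the URL for your deployment.
response = requests.post(
    "http://localhost:8000/v1/completions",
    json={
        "model": "astronomer-io/Llama-3-8B-GPTQ-8-Bit",
        "prompt": "Apache Airflow is",
        "max_tokens": 128,
        "temperature": 0.7,
        # Workaround until vLLM reads generation_config.json:
        # stop on the Llama 3 end-of-text / end-of-turn token IDs.
        "stop_token_ids": [128001, 128009],
    },
    timeout=120,
)
print(response.json()["choices"][0]["text"])
```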