This model was generously created and open-sourced by Astronomer.
Astronomer is the de facto company for Apache Airflow, the most trusted open-source framework for data orchestration and MLOps.
Llama-3-8B-GPTQ-4-Bit
- Original model creator: Meta
- Original model: meta-llama/Meta-Llama-3-8B
- Built with Meta Llama 3
- Quantized by David Xue from Astronomer
MUST READ: Very Important!! Note About Untrained Special Tokens in Llama 3 Base (Non-instruct) Models & Fine-tuning Llama 3 Base
- If you intend to fine-tune this model with any added tokens, or fine-tune for instruction following, please use the untrained-special-tokens-fixed branch/revision (see the loading sketch after this list).
- Special tokens such as the ones used for instruct are undertrained in Llama 3 base models.
- Credits: discovered by Daniel Han https://twitter.com/danielhanchen/status/1781395882925343058
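For example, a minimal sketch of loading that revision with transformers (assuming `optimum` and `auto-gptq` are installed so the GPTQ weights can be loaded; exact setup may vary):

```python
# Illustrative sketch: load the fixed branch by pinning the revision.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "astronomer/Llama-3-8B-GPTQ-4-Bit"
revision = "untrained-special-tokens-fixed"  # branch with the token fix applied

tokenizer = AutoTokenizer.from_pretrained(model_id, revision=revision)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    revision=revision,
    device_map="auto",  # place the quantized weights on the available GPU
)
```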
Important Note About Serving with vLLM & oobabooga/text-generation-webui
- For loading this model onto vLLM, make sure all requests include `"stop_token_ids": [128001, 128009]` to temporarily address the non-stop generation issue (see the example request in the vLLM serving section below).
  - vLLM does not yet respect `generation_config.json`.
  - The vLLM team is working on a fix: https://github.com/vllm-project/vllm/issues/4180
- For oobabooga/text-generation-webui
  - Load the model via AutoGPTQ with `no_inject_fused_attention` enabled; this works around a bug in the AutoGPTQ library (see the loading sketch after this list).
  - Under `Parameters` -> `Generation` -> `Skip special tokens`: turn this off (deselect).
  - Under `Parameters` -> `Generation` -> `Custom stopping strings`: add `"<|end_of_text|>","<|eot_id|>"` to the field.
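If you load the model directly with the auto-gptq library rather than through the webui, a hedged sketch of the equivalent of `no_inject_fused_attention` looks like this (argument names per recent auto-gptq versions):

```python
# Sketch: load with fused attention disabled to work around the AutoGPTQ bug.
from auto_gptq import AutoGPTQForCausalLM

model = AutoGPTQForCausalLM.from_quantized(
    "astronomer/Llama-3-8B-GPTQ-4-Bit",
    device="cuda:0",
    use_safetensors=True,
    inject_fused_attention=False,  # same effect as no_inject_fused_attention in the webui
)
```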
Description
This repo contains 4-bit quantized GPTQ model files for meta-llama/Meta-Llama-3-8B.
This model can be loaded with less than 6 GB of VRAM (huge reduction from the original 16.07GB model) and can be served lightning fast with the cheapest Nvidia GPUs possible (Nvidia T4, Nvidia K80, RTX 4070, etc).
The 4-bit GPTQ quant has minor quality degradation relative to the original bfloat16 model, but it can be served on much smaller GPUs with substantially lower latency and higher throughput.
The untrained-special-tokens-fixed branch contains the same model as the main branch, but with the untrained special tokens repaired: untrained tokens are identified by finding the rows of input_embeddings and output_embeddings whose maximum embedding value is 0, and those rows are set to the per-feature average of all trained tokens. Using this branch is recommended if you plan to do any fine-tuning, whether with your own added tokens or for instruction following.
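A rough, illustrative sketch of that fix (not the exact script used; `fix_untrained_tokens` is a hypothetical helper):

```python
def fix_untrained_tokens(model):
    """Set all-zero (untrained) embedding rows to the mean of the trained rows."""
    for emb in (model.get_input_embeddings(), model.get_output_embeddings()):
        w = emb.weight.data
        untrained = w.abs().max(dim=1).values == 0  # rows whose max embedding value is 0
        if untrained.any():
            w[untrained] = w[~untrained].mean(dim=0)  # per-feature average of trained tokens
```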
GPTQ Quantization Method
- This model was quantized with the AutoGPTQ library, following the best practices noted in the GPTQ paper.
- Quantization is calibrated with random samples from the specified dataset (wikitext for now) to minimize accuracy loss; a sketch of this setup appears after the table below.
| Branch | Bits | Group Size | Act Order | Damp % | GPTQ Dataset | Sequence Length | VRAM Size | ExLlama | Special Tokens Fixed | Description |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| main | 4 | 128 | Yes | 0.1 | wikitext | 8192 | 5.74 GB | Yes | No | 4-bit, with Act Order and group size 128g. Smallest model possible, with small accuracy loss |
| untrained-special-tokens-fixed | 4 | 128 | Yes | 0.1 | wikitext | 8192 | 5.74 GB | Yes | Yes | Same as the main branch, but the untrained special tokens that caused exploding/NaN gradients have had their embedding values set to the average of trained tokens for each feature |
| More variants to come | TBD | TBD | TBD | TBD | TBD | TBD | TBD | TBD | TBD | Additional 4-bit GPTQ variants may be uploaded in the future using different parameters such as group size |
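For reference, a hedged sketch of the quantization setup implied by the table, using the AutoGPTQ library (the calibration sample count and wikitext split here are illustrative, not the exact ones used):

```python
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
from datasets import load_dataset
from transformers import AutoTokenizer

base = "meta-llama/Meta-Llama-3-8B"
tokenizer = AutoTokenizer.from_pretrained(base)

quantize_config = BaseQuantizeConfig(
    bits=4,            # 4-bit weights
    group_size=128,    # "128g" in the table
    desc_act=True,     # Act Order
    damp_percent=0.1,  # Damp %
)

# Calibration samples from wikitext (the table's GPTQ dataset).
raw = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")["text"]
texts = [t for t in raw if t.strip()][:128]
examples = [tokenizer(t, return_tensors="pt") for t in texts]

model = AutoGPTQForCausalLM.from_pretrained(base, quantize_config)
model.quantize(examples)
model.save_quantized("Llama-3-8B-GPTQ-4-Bit")
```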
Serving this GPTQ model using vLLM
Serving this model via vLLM was tested on an Nvidia T4 (16 GB VRAM) using the command below:
python -m vllm.entrypoints.openai.api_server --model astronomer-io/Llama-3-8B-GPTQ-4-Bit --max-model-len 8192 --dtype float16
For the non-stop token generation bug, make sure to send requests with `"stop_token_ids": [128001, 128009]` to the vLLM endpoint, as in the example below.
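For example, with the openai Python client, `stop_token_ids` can be passed through `extra_body` (a vLLM extension to the OpenAI schema; the host and prompt below are placeholders):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="astronomer-io/Llama-3-8B-GPTQ-4-Bit",
    prompt="Apache Airflow is",
    max_tokens=128,
    # Stop on <|end_of_text|> (128001) and <|eot_id|> (128009).
    extra_body={"stop_token_ids": [128001, 128009]},
)
print(completion.choices[0].text)
```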
Contributors
- Quantized by David Xue, Machine Learning Engineer from Astronomer