---
base_model: meta-llama/Meta-Llama-3-8B-Instruct
inference: false
model_creator: astronomer-io
model_name: Meta-Llama-3-8B-Instruct
model_type: llama
pipeline_tag: text-generation
prompt_template: >-
{% set loop_messages = messages %}{% for message in loop_messages %}{% set
content = '<|start_header_id|>' + message['role'] + '<|end_header_id|>
'+ message['content'] | trim + '<|eot_id|>' %}{% if loop.index0 == 0 %}{% set
content = bos_token + content %}{% endif %}{{ content }}{% endfor %}{% if
add_generation_prompt %}{{ '<|start_header_id|>assistant<|end_header_id|>
' }}{% endif %}
quantized_by: davidxmle
license: other
license_name: llama-3-community-license
license_link: https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/blob/main/LICENSE
tags:
- llama
- llama-3
- facebook
- meta
- astronomer
- gptq
- pretrained
- quantized
- finetuned
- autotrain_compatible
- endpoints_compatible
datasets:
- wikitext
---
This model was generously created and open-sourced by Astronomer.
Astronomer is the de facto company for Apache Airflow, the most trusted open-source framework for data orchestration and MLOps.
# Llama-3-8B-Instruct-GPTQ-4-Bit

- Original model creator: Meta
- Original model: meta-llama/Meta-Llama-3-8B-Instruct
- Built with Meta Llama 3
- Quantized by Astronomer
## Description

This repo contains 4-bit GPTQ quantized model files for meta-llama/Meta-Llama-3-8B-Instruct.

This model can be loaded with less than 6 GB of VRAM (a large reduction from the original 16.07 GB bfloat16 model) and can be served quickly on the cheapest available Nvidia GPUs (Nvidia T4, Nvidia K80, RTX 4070, etc.).

The 4-bit GPTQ quantization incurs a small quality degradation compared to the original bfloat16 model, but the model can be served on much smaller GPUs with large improvements in latency and throughput.
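As a quick reference, the snippet below sketches one way to load this quantized checkpoint with Hugging Face Transformers and run the Llama 3 chat template. It is not from the original card; the package requirements, generation settings, and single-GPU setup are assumptions.

```python
# Minimal loading sketch (an illustration, not the card author's code).
# Assumes `transformers`, `accelerate`, and a GPTQ backend such as `auto-gptq`
# (with `optimum`) are installed, and that a CUDA GPU is available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "astronomer-io/Llama-3-8B-Instruct-GPTQ-4-Bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",          # place the ~6 GB of 4-bit weights on the GPU
    torch_dtype=torch.float16,  # activations in fp16; weights stay 4-bit GPTQ
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who created Llama 3?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=[128001, 128009],  # <|end_of_text|> and <|eot_id|>
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```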
## GPTQ Quantization Method

- This model was quantized with the AutoGPTQ library, following best practices from the GPTQ paper.
- Quantization is calibrated on random samples from the specified dataset (wikitext for now) to minimize accuracy loss; a hedged reproduction sketch follows the table below.
| Branch | Bits | Group Size | Act Order | Damp % | GPTQ Dataset | Sequence Length | VRAM Size | ExLlama | Description |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| main | 4 | 128 | Yes | 0.1 | wikitext | 8192 | 9.09 GB | Yes | 4-bit, with Act Order and group size 128g. Smallest possible model, with only a tiny accuracy loss. |
| More variants to come | TBD | TBD | TBD | TBD | TBD | TBD | TBD | TBD | Additional GPTQ 4-bit variants may be uploaded in the future using different parameters (e.g., other group sizes). |
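For readers who want to reproduce or tweak the quantization, the sketch below shows what an AutoGPTQ run with the `main` branch parameters above (4 bits, group size 128, act order, damp 0.1, wikitext calibration, 8192 sequence length) might look like. It is not the exact script used for this repo; the calibration sampling, dataset config name, and sequence handling are assumptions.

```python
# Hypothetical reproduction sketch, not the script used to build this repo.
# Requires: pip install auto-gptq transformers datasets
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
from datasets import load_dataset
from transformers import AutoTokenizer

base_model = "meta-llama/Meta-Llama-3-8B-Instruct"
out_dir = "Llama-3-8B-Instruct-GPTQ-4-Bit"

# Parameters from the `main` branch row of the table above
quantize_config = BaseQuantizeConfig(
    bits=4,            # 4-bit weights
    group_size=128,    # 128g grouping
    desc_act=True,     # "Act Order"
    damp_percent=0.1,  # Damp %
)

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoGPTQForCausalLM.from_pretrained(base_model, quantize_config)

# Calibration samples from wikitext (exact sampling used for this repo is unknown)
wikitext = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
texts = [t for t in wikitext["text"] if t.strip()][:128]
tokenized = [
    tokenizer(t, truncation=True, max_length=8192, return_tensors="pt") for t in texts
]
examples = [
    {"input_ids": e["input_ids"], "attention_mask": e["attention_mask"]} for e in tokenized
]

model.quantize(examples)
model.save_quantized(out_dir, use_safetensors=True)
tokenizer.save_pretrained(out_dir)
```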
## Serving this GPTQ model using vLLM

This model was tested for serving via vLLM on an Nvidia T4 (16 GB VRAM) with the command below:

```shell
python -m vllm.entrypoints.openai.api_server --model astronomer-io/Llama-3-8B-Instruct-GPTQ-4-Bit --max-model-len 8192 --dtype float16
```

To avoid the non-stop token generation bug, make sure to include `"stop_token_ids": [128001, 128009]` in every request sent to the vLLM endpoint.
Example request body:

```json
{
  "model": "astronomer-io/Llama-3-8B-Instruct-GPTQ-4-Bit",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who created Llama 3?"}
  ],
  "max_tokens": 2000,
  "stop_token_ids": [128001, 128009]
}
```
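Because vLLM exposes an OpenAI-compatible API, the same request can also be sent from Python with the `openai` client. The local host/port and the use of `extra_body` to pass the vLLM-specific `stop_token_ids` field are assumptions about a default local deployment, not part of the original card.

```python
# Hypothetical client call against a local vLLM OpenAI-compatible server
# started with the command above (default http://localhost:8000 assumed).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="astronomer-io/Llama-3-8B-Instruct-GPTQ-4-Bit",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who created Llama 3?"},
    ],
    max_tokens=2000,
    # stop_token_ids is a vLLM extension, so it is passed through extra_body
    extra_body={"stop_token_ids": [128001, 128009]},
)
print(response.choices[0].message.content)
```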
## Contributors

- Quantized by David Xue, Machine Learning Engineer at Astronomer