
MaziyarPanahi/Llama-3-8B-Instruct-64k AWQ


Model Summary

This model is based on the great work of @winglian and his latest model, winglian/Llama-3-8b-64k-PoSE.

This model uses PoSE to extend Llama-3's context length from 8k to 64k at rope_theta: 500000.0. PoSE continued pretraining was run on 300M tokens from the RedPajama V1 dataset, restricted to documents between 6k and 8k tokens, training a rank-stabilized LoRA of rank 256. After continued pretraining, rope_theta was set to 2M to potentially extend the context even further past 64k. Training logs are available on WandB.
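
If you want to check the rope_theta and context window that actually ship with a checkpoint, both are exposed on the model config. A minimal sketch, assuming the base repo id is MaziyarPanahi/Llama-3-8B-Instruct-64k and the checkpoint follows the standard LlamaConfig layout (rope_theta and max_position_embeddings are its stock fields):

from transformers import AutoConfig

# Inspect the RoPE base frequency and advertised context window
config = AutoConfig.from_pretrained("MaziyarPanahi/Llama-3-8B-Instruct-64k")
print(config.rope_theta)               # RoPE base (bumped to 2M after continued pretraining)
print(config.max_position_embeddings)  # maximum context length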

Quantized GGUF

All GGUF models ship with a context length of 64,000 tokens: MaziyarPanahi/Llama-3-8B-Instruct-64k-GGUF
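
To run one of the GGUF quants locally with the full 64k window, llama-cpp-python is one option. A minimal sketch; the quant file name below is hypothetical, substitute whichever quant you download:

from llama_cpp import Llama

# Load a GGUF quant with the full 64k context window
llm = Llama(
    model_path="Llama-3-8B-Instruct-64k.Q4_K_M.gguf",  # hypothetical local file name
    n_ctx=64000,  # match the context length the quants were exported with
)
out = llm("Summarize the following document:\n...", max_tokens=256)
print(out["choices"][0]["text"])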

How to use

Install the necessary packages

pip install --upgrade autoawq autoawq-kernels

Example Python code

from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer

model_path = "solidrust/Llama-3-8B-Instruct-64k-AWQ"
system_message = "You are Llama-3-8B-Instruct-64k, incarnated as a powerful AI. You were created by MaziyarPanahi."

# Load the quantized model and its tokenizer
model = AutoAWQForCausalLM.from_quantized(model_path,
                                          fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
                                          trust_remote_code=True)

# Stream decoded tokens to stdout as they are generated
streamer = TextStreamer(tokenizer,
                        skip_prompt=True,
                        skip_special_tokens=True)

prompt = "You're standing on the surface of the Earth. "\
         "You walk one mile south, one mile west and one mile north. "\
         "You end up exactly where you started. Where are you?"

# Build the prompt with the tokenizer's built-in Llama-3 chat template
# (Llama-3-Instruct uses its own header tokens, not ChatML)
messages = [
    {"role": "system", "content": system_message},
    {"role": "user", "content": prompt},
]
tokens = tokenizer.apply_chat_template(messages,
                                       add_generation_prompt=True,
                                       return_tensors="pt").cuda()

# Generate output
generation_output = model.generate(tokens,
                                   streamer=streamer,
                                   max_new_tokens=512)
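
The streamer prints tokens as they arrive; if you also want the completed text as a single string afterwards, the returned ids can be decoded. A minimal sketch (this assumes generate returns the usual Hugging Face-style id tensor, prompt included):

# Decode the full sequence (prompt + completion) returned by generate
print(tokenizer.decode(generation_output[0], skip_special_tokens=True))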

About AWQ

AWQ is an efficient, accurate, and fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference than GPTQ, with quality equivalent to or better than the most commonly used GPTQ settings.

AWQ models are currently supported on Linux and Windows with NVIDIA GPUs only; macOS users should use GGUF models instead.

It is supported by, among others, AutoAWQ, Transformers, vLLM, text-generation-webui, and Hugging Face Text Generation Inference (TGI).
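
Because recent versions of Transformers integrate AutoAWQ, the quantized checkpoint can also be loaded through the standard Auto classes. A minimal sketch, assuming autoawq is installed alongside Transformers:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Transformers picks up the AWQ quantization_config stored in the checkpoint
model = AutoModelForCausalLM.from_pretrained(
    "solidrust/Llama-3-8B-Instruct-64k-AWQ",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("solidrust/Llama-3-8B-Instruct-64k-AWQ")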

Model size: 1.98B params (Safetensors; tensor types I32 and FP16).

Quantized from: MaziyarPanahi/Llama-3-8B-Instruct-64k

Dataset used for continued pretraining: RedPajama V1