
Panda 7B v0.1 AWQ


Model Details

The Panda-7B-v0.1 model by NeuralNovel.

Fine-tuned to generate instructive and narrative text, with a focus on versatility, character engagement and nuanced writing.

The fine-tune is designed to provide detailed, creative and logical responses across diverse narratives, and is optimised for creative writing, roleplay and logical problem solving.

Panda-7B-v0.1 is a full-parameter fine-tune (FFT) of Mistral-7B-Instruct-v0.2, released under the Apache-2.0 license and suitable for commercial or non-commercial use. This repository contains the AWQ quantization of that model.

Sincere appreciation to Techmind for their generous sponsorship.

How to use

Install the necessary packages

pip install --upgrade autoawq autoawq-kernels
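
AutoAWQ needs a CUDA-enabled PyTorch build and an NVIDIA GPU. A quick sanity check before loading the model (a minimal sketch):

import torch

# AWQ kernels only run on NVIDIA GPUs; fail early if CUDA is not visible.
assert torch.cuda.is_available(), "AutoAWQ requires a CUDA-capable NVIDIA GPU"
print(torch.cuda.get_device_name(0))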

Example Python code

from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer

model_path = "solidrust/Panda-7B-v0.1-AWQ"
system_message = "You are Panda, incarnated as a powerful AI."

# Load the quantized model and its tokenizer
model = AutoAWQForCausalLM.from_quantized(model_path,
                                          fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
                                          trust_remote_code=True)
streamer = TextStreamer(tokenizer,
                        skip_prompt=True,
                        skip_special_tokens=True)

# Build the ChatML prompt and convert it to tokens
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""

prompt = "You're standing on the surface of the Earth. "\
         "You walk one mile south, one mile west and one mile north. "\
         "You end up exactly where you started. Where are you?"

tokens = tokenizer(prompt_template.format(system_message=system_message, prompt=prompt),
                   return_tensors='pt').input_ids.cuda()

# Generate output (streamed to stdout by the TextStreamer)
generation_output = model.generate(tokens,
                                   streamer=streamer,
                                   max_new_tokens=512)
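
The TextStreamer already prints the reply token by token. If you also want the finished text as a string, you can decode the returned ids, slicing off the prompt (a sketch reusing the variables above):

# generation_output holds prompt + completion ids; keep only the new tokens.
completion = generation_output[0][tokens.shape[1]:]
print(tokenizer.decode(completion, skip_special_tokens=True))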

About AWQ

AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference than GPTQ, with equivalent or better quality than the most commonly used GPTQ settings.

AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.

It is supported by (a vLLM sketch follows this list):

- Text Generation Webui (using the AutoAWQ loader)
- vLLM (version 0.2 and later)
- Hugging Face Text Generation Inference (TGI)
- Transformers (version 4.35.0 and later)
- AutoAWQ (for use from Python code)
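
As a minimal sketch of one of these paths, assuming vLLM 0.2+ is installed with AWQ support (the prompt and sampling parameters here are illustrative):

from vllm import LLM, SamplingParams

# vLLM picks up the AWQ quantization from the model config; it can also be
# forced explicitly via the quantization argument.
llm = LLM(model="solidrust/Panda-7B-v0.1-AWQ", quantization="awq")

outputs = llm.generate(["Tell me a short story about a panda."],
                       SamplingParams(temperature=0.8, max_tokens=256))
print(outputs[0].outputs[0].text)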

Prompt template: ChatML

<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
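
If the tokenizer ships with a chat template (most ChatML fine-tunes do, though treat that as an assumption for this repo), transformers can render this prompt for you instead of formatting the string by hand. A sketch reusing the tokenizer from the example above:

messages = [
    {"role": "system", "content": "You are Panda, incarnated as a powerful AI."},
    {"role": "user", "content": "Write a short story about a panda."},
]

# Renders the ChatML conversation above, appending the assistant header so the
# model knows to continue as the assistant.
tokens = tokenizer.apply_chat_template(messages,
                                       add_generation_prompt=True,
                                       return_tensors="pt").cuda()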
