---
tags:
  - finetuned
  - quantized
  - 4-bit
  - AWQ
  - transformers
  - pytorch
  - mistral
  - text-generation
  - conversational
base_model: senseable/WestLake-7B-v2
license: apache-2.0
language:
  - en
library_name: transformers
model_creator: Common Sense
model_name: WestLake 7B v2
model_type: mistral
pipeline_tag: text-generation
prompt_template: |
  <|im_start|>system
  {system_message}<|im_end|>
  <|im_start|>user
  {prompt}<|im_end|>
  <|im_start|>assistant
quantized_by: Suparious
---

# WestLake 7B v2 laser - AWQ

This model follows the laserRMT implementation.


## Model description

This repo contains AWQ model files for Common Sense's WestLake 7B v2.

These files were quantised using hardware kindly provided by SolidRusT Networks.

## How to use

### Install the necessary packages

```shell
pip install --upgrade autoawq autoawq-kernels
```
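Optionally, you can sanity-check that the install succeeded and that a CUDA device is visible (AWQ inference requires an NVIDIA GPU). This one-liner is an illustrative check, not part of the package's documented setup:

```shell
python -c "from awq import AutoAWQForCausalLM; import torch; print(torch.cuda.is_available())"
```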

### Example Python code

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer

model_path = "solidrust/WestLake-7B-v2-laser-AWQ"
system_message = "Welcome to WestLake. You are here to help users with any questions they may have."

# Load the quantized model and its tokenizer
model = AutoAWQForCausalLM.from_quantized(model_path,
                                          fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
                                          trust_remote_code=True)
streamer = TextStreamer(tokenizer,
                        skip_prompt=True,
                        skip_special_tokens=True)

# Build the prompt using the ChatML template declared for this model
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
"""

prompt = "You're standing on the surface of the Earth. " \
         "You walk one mile south, one mile west and one mile north. " \
         "You end up exactly where you started. Where are you?"

tokens = tokenizer(prompt_template.format(system_message=system_message,
                                          prompt=prompt),
                   return_tensors='pt').input_ids.cuda()

# Generate output, streaming tokens as they are produced
generation_output = model.generate(tokens,
                                   streamer=streamer,
                                   max_new_tokens=512)
```
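
The streamer prints tokens as they are generated. If you also want the finished completion as a string, a minimal follow-up, reusing `generation_output` and `tokenizer` from the example above, could be:

```python
# generation_output holds the prompt plus completion token ids;
# decode them into text, dropping special tokens such as <|im_end|>
output_text = tokenizer.decode(generation_output[0], skip_special_tokens=True)
print(output_text)
```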

## About AWQ

AWQ is an efficient, accurate, and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference than GPTQ, with quality equivalent to or better than the most commonly used GPTQ settings.
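
For context on how files like these are produced, below is a minimal quantization sketch using AutoAWQ. The output path and `quant_config` values are assumptions (4-bit weights with group size 128 is a common choice), not the exact settings used for this repo:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

base_model = "senseable/WestLake-7B-v2"  # base model named in this card's metadata
quant_path = "WestLake-7B-v2-AWQ"        # hypothetical output directory
quant_config = {"zero_point": True, "q_group_size": 128,
                "w_bit": 4, "version": "GEMM"}  # assumed, typical 4-bit settings

# Load the full-precision model, quantize its weights, and save the AWQ artifacts
model = AutoAWQForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)

model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```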

AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users should use GGUF models instead.

It is supported by:

- text-generation-webui (using the AutoAWQ loader)
- vLLM
- Hugging Face Text Generation Inference (TGI)
- Transformers
- AutoAWQ (for use from Python code)
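
As one illustration, loading this repo in vLLM could look roughly like the sketch below, assuming a vLLM build with AWQ support:

```python
from vllm import LLM, SamplingParams

# Point vLLM's AWQ backend at the quantized checkpoint
llm = LLM(model="solidrust/WestLake-7B-v2-laser-AWQ", quantization="awq")
params = SamplingParams(temperature=0.7, max_tokens=512)

prompt = ("<|im_start|>user\nWhy is the sky blue?<|im_end|>\n"
          "<|im_start|>assistant\n")
outputs = llm.generate([prompt], params)
print(outputs[0].outputs[0].text)
```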

## Prompt template: ChatML

```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
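
If the repo's tokenizer ships a ChatML chat template in its `tokenizer_config.json` (an assumption worth verifying), `transformers` can render this format for you:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("solidrust/WestLake-7B-v2-laser-AWQ")
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Why is the sky blue?"},
]
# Render the conversation using the stored chat template
text = tokenizer.apply_chat_template(messages, tokenize=False,
                                     add_generation_prompt=True)
print(text)
```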

It also works with the basic Mistral format:

```
<|system|>
</s>
<|user|>
{prompt}</s>
<|assistant|>
```