Tags: Text Generation · Transformers · Safetensors · PyTorch · English · mistral · finetuned · quantized · 4-bit precision · AWQ · instruct · conversational · Inference Endpoints · text-generation-inference · finetune · chatml · RLHF · gpt4 · synthetic data · distillation

A bagel, with everything (except DPO)


Model Description

This is an AWQ quantization of the pre-DPO version of the mistral-7b model fine-tuned with https://github.com/jondurbin/bagel.

The DPO counterpart is available here: https://huggingface.co/solidrust/bagel-dpo-7b-v0.4-AWQ

The non-DPO version is likely better for roleplay usage.

Compute generously provided by MassedCompute.

How to use

Install the necessary packages

pip install --upgrade autoawq autoawq-kernels

Example Python code

from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer

model_path = "solidrust/bagel-7b-v0.4-AWQ"
system_message = "You are Bagel, incarnated as a powerful AI with everything."

# Load the quantized model and its tokenizer
model = AutoAWQForCausalLM.from_quantized(model_path,
                                          fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
                                          trust_remote_code=True)

# Stream decoded tokens to stdout as they are generated
streamer = TextStreamer(tokenizer,
                        skip_prompt=True,
                        skip_special_tokens=True)

# Convert prompt to tokens
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""

prompt = "You're standing on the surface of the Earth. " \
         "You walk one mile south, one mile west and one mile north. " \
         "You end up exactly where you started. Where are you?"

tokens = tokenizer(prompt_template.format(system_message=system_message, prompt=prompt),
                   return_tensors="pt").input_ids.cuda()

# Generate output
generation_output = model.generate(tokens,
                                   streamer=streamer,
                                   max_new_tokens=512)
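
The streamer prints the reply to stdout as it is generated. If you also want the completed reply as a string, you can decode the returned tensor yourself; a minimal sketch (the slice assumes generate() returns the prompt tokens followed by the new tokens, which is the default for causal LMs in transformers):

# Optional: recover the reply as a string once generation finishes.
# generate() returns prompt + completion, so slice off the prompt tokens.
reply = tokenizer.decode(generation_output[0][tokens.shape[1]:],
                         skip_special_tokens=True)
print(reply)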

About AWQ

AWQ is an efficient, accurate, and fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference than GPTQ, with quality equivalent to or better than the most commonly used GPTQ settings.
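
For reference, producing an AWQ checkpoint like this one follows the AutoAWQ quantization pattern below. This is a minimal sketch, not the exact recipe used for this repository: the source checkpoint name, output path, and quant_config values (standard 4-bit, group-size-128 GEMM settings) are assumptions.

from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

# Assumed source checkpoint and output path -- illustrative only
model_path = "jondurbin/bagel-7b-v0.4"
quant_path = "bagel-7b-v0.4-awq"

# Typical 4-bit AWQ settings: zero-point, group size 128, GEMM kernels
quant_config = {"zero_point": True, "q_group_size": 128,
                "w_bit": 4, "version": "GEMM"}

# Load the full-precision model, calibrate and quantize, then save
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)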

AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users should use GGUF models instead.

It is supported by:

- Text Generation Inference (TGI)
- vLLM
- Hugging Face Transformers
- AutoAWQ (used in the example above)

Prompt template: ChatML

<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
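
Because the model uses ChatML, you can also let transformers render the prompt instead of formatting the template by hand. A minimal sketch, assuming the repository's tokenizer ships a ChatML chat template:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("solidrust/bagel-7b-v0.4-AWQ")

messages = [
    {"role": "system", "content": "You are Bagel, incarnated as a powerful AI with everything."},
    {"role": "user", "content": "Where are you?"},
]

# Renders the ChatML markup shown above, ending with the assistant header
prompt = tokenizer.apply_chat_template(messages,
                                       tokenize=False,
                                       add_generation_prompt=True)
print(prompt)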
