---
language:
  - en
license: apache-2.0
tags:
  - finetuned
  - quantized
  - 4-bit
  - AWQ
  - transformers
  - pytorch
  - mistral
  - instruct
  - text-generation
  - conversational
  - autotrain_compatible
  - endpoints_compatible
  - text-generation-inference
  - finetune
  - chatml
  - DPO
  - RLHF
  - gpt4
  - synthetic data
  - distillation
datasets:
  - allenai/ai2_arc
  - allenai/ultrafeedback_binarized_cleaned
  - argilla/distilabel-intel-orca-dpo-pairs
  - jondurbin/airoboros-3.2
  - codeparrot/apps
  - facebook/belebele
  - bluemoon-fandom-1-1-rp-cleaned
  - boolq
  - camel-ai/biology
  - camel-ai/chemistry
  - camel-ai/math
  - camel-ai/physics
  - jondurbin/contextual-dpo-v0.1
  - jondurbin/gutenberg-dpo-v0.1
  - jondurbin/py-dpo-v0.1
  - jondurbin/truthy-dpo-v0.1
  - LDJnr/Capybara
  - jondurbin/cinematika-v0.1
  - WizardLM/WizardLM_evol_instruct_70k
  - glaiveai/glaive-function-calling-v2
  - grimulkan/LimaRP-augmented
  - lmsys/lmsys-chat-1m
  - ParisNeo/lollms_aware_dataset
  - TIGER-Lab/MathInstruct
  - Muennighoff/natural-instructions
  - openbookqa
  - kingbri/PIPPA-shareGPT
  - piqa
  - Vezora/Tested-22k-Python-Alpaca
  - ropes
  - cakiki/rosetta-code
  - Open-Orca/SlimOrca
  - b-mc2/sql-create-context
  - squad_v2
  - mattpscott/airoboros-summarization
  - migtissera/Synthia-v1.3
  - unalignment/toxic-dpo-v0.2
  - WhiteRabbitNeo/WRN-Chapter-1
  - WhiteRabbitNeo/WRN-Chapter-2
  - winogrande
model_name: bagel 7B v0.4 DPO
base_model: mistralai/Mistral-7B-v0.1
quantized_by: Suparious
pipeline_tag: text-generation
model_creator: jondurbin
inference: false
prompt_template: |-
  {bos}<|im_start|>{role}
  {text}
  <|im_end|>{eos}
---

# A bagel, with everything


## Model Description

This model is an AWQ-quantized version of jondurbin's bagel-dpo-7b-v0.4: a fine-tune of mistral-7b-v0.1 that underwent additional fine-tuning using direct preference optimization (DPO).

See bagel for additional details on the datasets.

The non-DPO version is available here, and is likely superior for roleplay.

Compute was generously provided by MassedCompute.

## How to use

Install the necessary packages:

```shell
pip install --upgrade autoawq autoawq-kernels
```
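Before loading the model, it can help to confirm that a CUDA-enabled PyTorch build is visible, since the AWQ kernels require an NVIDIA GPU (a minimal sanity check, not specific to this model):

```python
import torch
from awq import AutoAWQForCausalLM  # noqa: F401 -- fails here if AutoAWQ is not installed

# AWQ kernels run on NVIDIA GPUs only; this should print True.
print(torch.cuda.is_available())
```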

Example Python code:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer

model_path = "solidrust/bagel-dpo-7b-v0.4-AWQ"
system_message = "You are Bagel, incarnated as a powerful AI with everything."

# Load the quantized model and its tokenizer
model = AutoAWQForCausalLM.from_quantized(model_path,
                                          fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
                                          trust_remote_code=True)
streamer = TextStreamer(tokenizer,
                        skip_prompt=True,
                        skip_special_tokens=True)

# ChatML prompt template
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""

prompt = "You're standing on the surface of the Earth. " \
         "You walk one mile south, one mile west and one mile north. " \
         "You end up exactly where you started. Where are you?"

# Convert the formatted prompt to input token ids on the GPU
tokens = tokenizer(prompt_template.format(system_message=system_message,
                                          prompt=prompt),
                   return_tensors='pt').input_ids.cuda()

# Generate output (streamed to stdout via the TextStreamer)
generation_output = model.generate(tokens,
                                   streamer=streamer,
                                   max_new_tokens=512)
```
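The `TextStreamer` prints tokens to stdout as they are generated. If you also want the completion as a plain string (a small, optional follow-up to the example above), you can slice off the prompt and decode only the new tokens:

```python
# Drop the prompt ids, then decode just the newly generated tokens
new_tokens = generation_output[0][tokens.shape[-1]:]
response = tokenizer.decode(new_tokens, skip_special_tokens=True)
print(response)
```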

## About AWQ

AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with quality equivalent to or better than the most commonly used GPTQ settings.

AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.

It is supported by:

- AutoAWQ (used in the example above)
- Transformers
- Hugging Face Text Generation Inference (TGI)
- vLLM
- text-generation-webui (via the AutoAWQ loader)
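For serving, here is a minimal vLLM sketch (assuming a vLLM install with AWQ support; the sampling settings are illustrative, and the prompt is formatted with the ChatML template shown below):

```python
from vllm import LLM, SamplingParams

# Load the AWQ checkpoint with vLLM's AWQ quantization backend
llm = LLM(model="solidrust/bagel-dpo-7b-v0.4-AWQ", quantization="awq")
params = SamplingParams(temperature=0.7, max_tokens=256)

prompt = ("<|im_start|>user\n"
          "What is on an everything bagel?<|im_end|>\n"
          "<|im_start|>assistant\n")
outputs = llm.generate([prompt], params)
print(outputs[0].outputs[0].text)
```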

Prompt template: ChatML

<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
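Rather than hand-writing ChatML strings, you can usually build the prompt with the tokenizer's chat template (this assumes the repo's tokenizer ships a ChatML chat template; check `tokenizer_config.json` if unsure):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("solidrust/bagel-dpo-7b-v0.4-AWQ")
messages = [
    {"role": "system", "content": "You are Bagel, incarnated as a powerful AI with everything."},
    {"role": "user", "content": "Where are you?"},
]
# add_generation_prompt appends the assistant header so the model continues from there
prompt = tokenizer.apply_chat_template(messages,
                                       tokenize=False,
                                       add_generation_prompt=True)
print(prompt)  # should match the ChatML template above
```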