
Dracarys2-72B-Instruct

Introduction

We introduce the latest in the Smaug series: the Dracarys family of finetunes, which targets improved coding performance across a variety of base models.

This variant is a finetune of Qwen2.5-72B-Instruct.

Compared to Qwen2.5-72B-Instruct, Dracarys has better LiveCodeBench scores (see evaluation results below).

Model Description

How to use

The prompt format is unchanged from Qwen2.5-72B-Instruct (see the Evaluation Results section for the prompts used with LiveCodeBench).
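
For reference, Qwen2.5-style models use a ChatML-style chat template, so each turn is wrapped in <|im_start|>/<|im_end|> tokens. A minimal sketch for inspecting the rendered prompt, assuming the tokenizer ships the standard Qwen2.5 chat template:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("abacusai/Dracarys2-72B-Instruct")
messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Reverse a string in Python."},
]
# Render the conversation as text (no tokenization) to see the exact prompt format.
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
# Expected shape of the output:
# <|im_start|>system
# You are a helpful coding assistant.<|im_end|>
# <|im_start|>user
# Reverse a string in Python.<|im_end|>
# <|im_start|>assistant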

Use with transformers

See the snippet below for usage with Transformers:

import transformers
import torch

model_id = "abacusai/Dracarys2-72B-Instruct"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are data science coding assistant that generates Python code using Pandas and Numpy."},
    {"role": "user", "content": "Write code to select rows from the dataframe `df` having the maximum `temp` for each `city`"},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

# Qwen2.5 uses the ChatML end-of-turn token <|im_end|> (not Llama's <|eot_id|>).
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|im_end|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
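
For the example prompt above, a typical correct answer would use a groupby/idxmax pattern. A minimal hand-written sketch of the kind of code the assistant is expected to produce (not actual model output; the sample data is made up):

import pandas as pd

df = pd.DataFrame({
    "city": ["Austin", "Austin", "Boston", "Boston"],
    "temp": [92, 97, 75, 81],
})

# Select the row with the maximum `temp` for each `city`.
hottest = df.loc[df.groupby("city")["temp"].idxmax()]
print(hottest)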

Evaluation Results

LiveCodeBench

| Model | Code Generation | Code Execution (COT) | Test Output Prediction |
|---|---|---|---|
| Dracarys2-72B-Instruct | 53.80 | 89.12 | 59.61 |
| Qwen2.5-72B-Instruct | 53.03 | 88.72 | 46.28 |

Breakdown of LiveCodeBench CodeGeneration

| Model | Easy | Medium | Hard |
|---|---|---|---|
| Dracarys2-72B-Instruct | 88.79 | 50.28 | 9.47 |
| Qwen2.5-72B-Instruct | 86.99 | 49.59 | 9.99 |

Breakdown of LiveCodeBench TestOutputPrediction

| Model | Easy | Medium | Hard |
|---|---|---|---|
| Dracarys2-72B-Instruct | 79.25 | 53.76 | 37.63 |
| Qwen2.5-72B-Instruct | 68.43 | 39.46 | 22.22 |