
# Qwen2-72B-Instruct-FP8

## Model Overview

Qwen2-72B-Instruct with weights and activations quantized to FP8 using per-tensor scales, ready for inference with vLLM >= 0.5.0.
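A minimal inference sketch with vLLM follows; the prompt, sampling parameters, and `tensor_parallel_size` are illustrative (a 72B model typically needs several GPUs):

```python
from vllm import LLM, SamplingParams

# Illustrative settings: adjust tensor_parallel_size to your GPU count.
llm = LLM(model="neuralmagic/Qwen2-72B-Instruct-FP8", tensor_parallel_size=4)
sampling_params = SamplingParams(temperature=0.7, max_tokens=256)

outputs = llm.generate(["What is FP8 quantization?"], sampling_params)
print(outputs[0].outputs[0].text)
```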

## Usage and Creation

Produced using [AutoFP8](https://github.com/neuralmagic/AutoFP8) with 512 calibration samples from UltraChat:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

from auto_fp8 import AutoFP8ForCausalLM, BaseQuantizeConfig

pretrained_model_dir = "Qwen/Qwen2-72B-Instruct"
quantized_model_dir = "Qwen2-72B-Instruct-FP8"

tokenizer = AutoTokenizer.from_pretrained(pretrained_model_dir, use_fast=True)
tokenizer.pad_token = tokenizer.eos_token

# Build 512 calibration prompts from UltraChat, rendered with the model's chat template.
ds = load_dataset("mgoin/ultrachat_2k", split="train_sft").select(range(512))
examples = [tokenizer.apply_chat_template(batch["messages"], tokenize=False) for batch in ds]
examples = tokenizer(examples, padding=True, truncation=True, return_tensors="pt").to("cuda")

# Static per-tensor FP8 quantization of both weights and activations.
quantize_config = BaseQuantizeConfig(quant_method="fp8", activation_scheme="static")

model = AutoFP8ForCausalLM.from_pretrained(
    pretrained_model_dir, quantize_config=quantize_config
)
model.quantize(examples)  # calibrates activation scales on the examples above
model.save_quantized(quantized_model_dir)
```
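For intuition, here is a hedged sketch of what per-tensor FP8 (E4M3) quantization computes: one scale per tensor maps its observed absolute maximum onto the finite E4M3 range (±448). The function below is illustrative only, not AutoFP8's internals:

```python
import torch

FP8_E4M3_MAX = 448.0  # largest finite value representable in float8_e4m3fn

def quantize_per_tensor_fp8(x: torch.Tensor):
    """Illustrative per-tensor FP8 quantization (not the AutoFP8 implementation)."""
    scale = x.abs().max().clamp(min=1e-12) / FP8_E4M3_MAX
    x_fp8 = (x / scale).to(torch.float8_e4m3fn)
    return x_fp8, scale  # dequantize with x_fp8.to(x.dtype) * scale
```

With the `static` activation scheme used above, activation scales are fixed from the calibration pass rather than recomputed per batch at inference time.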

## Evaluation

Open LLM Leaderboard evaluation scores:

| Benchmark | Qwen2-72B-Instruct | Qwen2-72B-Instruct-FP8 (this model) |
| --- | ---: | ---: |
| arc-c (25-shot) | 71.58 | 72.09 |
| hellaswag (10-shot) | 86.94 | 86.83 |
| mmlu (5-shot) | 83.97 | 84.06 |
| truthfulqa (0-shot) | 66.98 | 66.95 |
| winogrande (5-shot) | 82.79 | 83.18 |
| gsm8k (5-shot) | 87.56 | 88.93 |
| Average accuracy | 79.97 | 80.34 |
| Recovery | 100% | 100.46% |
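Scores like these can be reproduced with lm-evaluation-harness; the sketch below uses its Python API for one task, with the few-shot counts taken from the table above. The backend name and argument format are assumptions about recent harness versions, not taken from this card:

```python
import lm_eval

# Illustrative: evaluate one leaderboard task; repeat per task with the
# few-shot counts listed in the table above.
results = lm_eval.simple_evaluate(
    model="vllm",
    model_args="pretrained=neuralmagic/Qwen2-72B-Instruct-FP8,tensor_parallel_size=4",
    tasks=["arc_challenge"],
    num_fewshot=25,
)
print(results["results"])
```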