
Qwen2-0.5B-Instruct-FP8

Model Overview

  • Model Architecture: Qwen2-0.5B-Instruct (architecture unchanged from the base model)
  • Model Optimizations: Weights and activations quantized to FP8 (a simplified sketch of per-tensor FP8 scaling follows this list)
  • Release Date: June 14, 2024
  • Model Developers: Neural Magic
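
As a rough illustration only (not the AutoFP8 implementation; the function name and constant below are ours), a static per-tensor FP8 scheme stores one scale per tensor, chosen so that the tensor's absolute maximum maps onto the FP8 (E4M3) representable range:

import torch

# Illustrative sketch of static per-tensor FP8 (E4M3) quantization.
# One scale covers the whole tensor; AutoFP8 additionally handles calibration,
# per-layer bookkeeping, and checkpoint export.
FP8_E4M3_MAX = 448.0  # largest finite value representable by torch.float8_e4m3fn

def quantize_per_tensor_fp8(x: torch.Tensor):
    scale = x.abs().max() / FP8_E4M3_MAX                      # one scale for the whole tensor
    x_fp8 = (x / scale).clamp(-FP8_E4M3_MAX, FP8_E4M3_MAX).to(torch.float8_e4m3fn)
    return x_fp8, scale                                       # dequantize as x_fp8.float() * scale

weight = torch.randn(1024, 1024)                              # stand-in for a linear layer weight
weight_fp8, weight_scale = quantize_per_tensor_fp8(weight)
print(weight_fp8.dtype, weight_scale.item())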

This is Qwen2-0.5B-Instruct with weights and activations quantized to FP8 via per-tensor quantization using the AutoFP8 repository, ready for inference with vLLM >= 0.5.0. The model was calibrated with 512 UltraChat samples and recovers 99.95% of the unquantized model's average score on the Open LLM Leaderboard evaluations (see the table below), while reducing disk space by roughly 30%. It is part of the FP8 LLMs for vLLM collection.
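
A minimal sketch of running this checkpoint with vLLM (>= 0.5.0); the prompt and sampling parameters are illustrative:

from vllm import LLM, SamplingParams

# Load the FP8 checkpoint; vLLM picks up the quantization config from the model files.
llm = LLM(model="neuralmagic/Qwen2-0.5B-Instruct-FP8")

sampling_params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=128)
outputs = llm.generate(["Give me a short introduction to large language models."], sampling_params)

for output in outputs:
    print(output.outputs[0].text)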

Usage and Creation

This model was produced using AutoFP8 with calibration samples from UltraChat:

from datasets import load_dataset
from transformers import AutoTokenizer

from auto_fp8 import AutoFP8ForCausalLM, BaseQuantizeConfig

pretrained_model_dir = "Qwen/Qwen2-0.5B-Instruct"
quantized_model_dir = "Qwen2-0.5B-Instruct-FP8"

# Tokenizer for building calibration prompts; pad with EOS since Qwen2 defines no pad token.
tokenizer = AutoTokenizer.from_pretrained(pretrained_model_dir, use_fast=True, model_max_length=4096)
tokenizer.pad_token = tokenizer.eos_token

# 512 UltraChat conversations, rendered with the chat template and tokenized on GPU.
ds = load_dataset("mgoin/ultrachat_2k", split="train_sft").select(range(512))
examples = [tokenizer.apply_chat_template(example["messages"], tokenize=False) for example in ds]
examples = tokenizer(examples, padding=True, truncation=True, return_tensors="pt").to("cuda")

# Static activation scheme: per-tensor scales are fixed from the calibration data.
quantize_config = BaseQuantizeConfig(quant_method="fp8", activation_scheme="static")

model = AutoFP8ForCausalLM.from_pretrained(
    pretrained_model_dir, quantize_config=quantize_config
)
model.quantize(examples)          # run calibration and convert weights/activations to FP8
model.save_quantized(quantized_model_dir)

Evaluated through vLLM with the following script:

#!/bin/bash

# Example usage:
# CUDA_VISIBLE_DEVICES=0 ./eval_openllm.sh "neuralmagic/Qwen2-0.5B-Instruct-FP8" "tensor_parallel_size=1,max_model_len=4096,add_bos_token=True,gpu_memory_utilization=0.7"

export MODEL_DIR=${1}
export MODEL_ARGS=${2}

declare -A tasks_fewshot=(
    ["arc_challenge"]=25
    ["winogrande"]=5
    ["truthfulqa_mc2"]=0
    ["hellaswag"]=10
    ["mmlu"]=5
    ["gsm8k"]=5
)

declare -A batch_sizes=(
    ["arc_challenge"]="auto"
    ["winogrande"]="auto"
    ["truthfulqa_mc2"]="auto"
    ["hellaswag"]="auto"
    ["mmlu"]=1
    ["gsm8k"]="auto"
)

for TASK in "${!tasks_fewshot[@]}"; do
    NUM_FEWSHOT=${tasks_fewshot[$TASK]}
    BATCH_SIZE=${batch_sizes[$TASK]}
    lm_eval --model vllm \
        --model_args pretrained=$MODEL_DIR,$MODEL_ARGS \
        --tasks ${TASK} \
        --num_fewshot ${NUM_FEWSHOT} \
        --write_out \
        --show_config \
        --device cuda \
        --batch_size ${BATCH_SIZE} \
        --output_path="results/${TASK}"
done

Evaluation

The model was evaluated on the Open LLM Leaderboard benchmarks through vLLM.

Open LLM Leaderboard evaluation scores

Benchmark               Qwen2-0.5B-Instruct   Qwen2-0.5B-Instruct-FP8 (this model)
arc-c (25-shot)         31.74                 32.00
hellaswag (10-shot)     49.45                 49.21
mmlu (5-shot)           43.87                 43.63
truthfulqa (0-shot)     39.37                 39.33
winogrande (5-shot)     55.49                 56.59
gsm8k (5-shot)          37.83                 36.85
Average accuracy        42.96                 42.94
Recovery                100%                  99.95%
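
Recovery is the ratio of the FP8 model's average accuracy to the unquantized baseline's: 42.94 / 42.96 ≈ 99.95%.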