# 🔥 Quantized Model: Arcee-Blitz_gptq_g32_4bit 🔥

This is a 4-bit quantized version of the [arcee-ai/Arcee-Blitz](https://huggingface.co/arcee-ai/Arcee-Blitz) model, quantized by [ConfidentialMind.com](https://www.confidentialmind.com) 🤗✨

It leverages the open-source [GPTQModel](https://github.com/ModelCloud/GPTQModel/tree/main) quantization library to achieve 4-bit precision with a group size of 32, resulting in a smaller, faster model with minimal performance degradation.

Quantization ran on a single NVIDIA A100 GPU with 80GB of VRAM.

*Note:* `batch_size` is set quite high because the model is small; you may need to adjust it to your GPU VRAM.
## Model Details

- **Original Model:** [arcee-ai/Arcee-Blitz](https://huggingface.co/arcee-ai/Arcee-Blitz)
- **Quantized Model:** Arcee-Blitz_gptq_g32_4bit (this repository)
- **Quantization Method:** GPTQ (4-bit, group size 32)
- **Quantization Library:** [GPTQModel](https://github.com/ModelCloud/GPTQModel/tree/main)
- **Calibration Dataset:** neuralmagic/LLM_compression_calibration (1024 samples, sequence length 4096)
- **Quantized by:** [ConfidentialMind.com](https://www.confidentialmind.com)
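
The `_g32_` suffix in the repository name reflects the group size that was actually used (the `accurate` setting of the quantization script below). As a quick sanity check, the quantization settings shipped with the checkpoint can be inspected; a minimal sketch, assuming a transformers-compatible `config.json` (key names may vary across GPTQModel/transformers versions):

```python
# Hedged sketch: print the quantization config stored with the checkpoint.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("JustJaro/Arcee-Blitz_gptq_g32_4bit")
print(getattr(cfg, "quantization_config", None))  # expect bits=4, group_size=32
```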
## Usage

```python
from gptqmodel import GPTQModel
from transformers import AutoTokenizer
# Use the local directory or JustJaro/Arcee-Blitz_gptq_g32_4bit after upload
quantized_model_id = "/home/jaro/models/quantized/Arcee-Blitz_gptq_g32_4bit" # or "JustJaro/Arcee-Blitz_gptq_g32_4bit"
tokenizer = AutoTokenizer.from_pretrained(quantized_model_id)
model = GPTQModel.load(quantized_model_id, device="cuda:0") # or "cpu"
input_text = "This is a test prompt"
inputs = tokenizer(input_text, return_tensors="pt").to("cuda:0")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
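
`generate` is called above with its defaults, which produce short completions. Longer or sampled outputs can be requested; a minimal sketch, assuming the standard transformers `generate` signature (all values are illustrative):

```python
# Sketch: longer, sampled generation on top of the snippet above.
outputs = model.generate(
    **inputs,
    max_new_tokens=128,  # budget for the completion
    do_sample=True,      # sample instead of greedy decoding
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```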
## Package Versions and Installation Instructions

See `pyproject.toml` for the exact UV project file. See the [GPTQModel](https://github.com/ModelCloud/GPTQModel/tree/main) repo for more details on how to install the package.

Use the provided `pyproject.toml`:

```bash
uv venv
source .venv/bin/activate
uv sync
```
## Environment Variables

```bash
HF_TOKEN=<YOUR_HF_TOKEN>
TOKENIZERS_PARALLELISM="true"
PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
```
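
The quantization script below picks these up through `python-dotenv`, so placing them in a `.env` file next to the script is enough. A minimal sketch of what happens at startup (mirroring the script's own imports):

```python
# Sketch of the script's startup behavior (see quantize.py below).
import os
from dotenv import load_dotenv, find_dotenv

load_dotenv(find_dotenv())        # reads a .env file if one exists
HF_TOKEN = os.getenv("HF_TOKEN")  # used to create/update the Hub repo
```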
## Quantization Script

Below is the exact `quantize.py` script used to generate this model (with the exact versions of the dependencies):

`````python
#!/usr/bin/env python3
"""
This script loads a source Hugging Face model and a calibration dataset,
quantizes the model using GPTQModel (4-bit precision with a configurable group size),
saves the quantized model using the Transformers API with safetensors (safe serialization)
under ~/models/quantized/, and then creates/updates a Hugging Face repository (with the
_gptq_g<group_size>_4bit suffix) by uploading the model, tokenizer, and an auto-generated README.md.

Usage example:
    python quantize.py --source-model TinyLlama/TinyLlama-1.1B-Chat-v1.0 \
        --calibration-dataset wikitext/wikitext-2-raw-v1 \
        --seq-len 1024 --nsamples 256 --hf-token <YOUR_HF_TOKEN>
"""
import os
import shutil
import subprocess
from enum import Enum
from pathlib import Path
from typing import List
import torch
import typer
from datasets import load_dataset
from dotenv import load_dotenv, find_dotenv
from gptqmodel import GPTQModel, QuantizeConfig
from gptqmodel.utils import Perplexity
# For later pushing to the model hub
from huggingface_hub import HfApi
from transformers import AutoTokenizer, PreTrainedTokenizerBase
load_dotenv(find_dotenv())
HF_TOKEN = os.getenv("HF_TOKEN")
app = typer.Typer()
class GroupSize(str, Enum):
    accurate: int = 32
    balanced: int = 64
    fast: int = 128
def get_text_from_example(example: dict) -> str:
    """
    Returns text from a dataset example.

    If the example contains a "text" field, and it is nonempty, that text is used.
    Otherwise, if it has a "messages" field (a list of dicts with a "content" key),
    the function returns the concatenation of all non-empty message contents.
    """
    if "text" in example and example["text"]:
        return example["text"]
    elif "messages" in example:
        contents = [msg.get("content", "").strip() for msg in example["messages"]]
        return " ".join([s for s in contents if s])
    else:
        return ""
def get_calibration_dataset(
        tokenizer: PreTrainedTokenizerBase,
        nsamples: int,
        seqlen: int,
        calibration_dataset: str
) -> List[dict]:
    """
    Loads a calibration dataset from the Hugging Face Hub (or from a local file).

    It accepts datasets with a single "text" field (like wikitext)
    or with a "messages" field (as in the Neural Magic LLM Compression Calibration dataset).
    Only examples whose extracted text length is at least 80% of 'seqlen' are kept.
    Each chosen example is tokenized (with truncation up to 'seqlen') and returned as a dict.
    """
    ds = None
    try:
        # Attempt to load from the HF Hub.
        try:
            if "/" in calibration_dataset:
                parts = calibration_dataset.split("/", 1)
                ds = load_dataset(parts[0], parts[1], split="train")
            else:
                ds = load_dataset(calibration_dataset, split="train")
        except Exception as e:
            print(f"Error loading dataset '{calibration_dataset}' via load_dataset: {e}")
            ds = load_dataset(calibration_dataset, split="train")
        print(f"Loaded calibration dataset from full remote path {calibration_dataset}.")
    except Exception as e:
        print(f"Error loading dataset '{calibration_dataset}' via load_dataset: {e}")
        # Fallback: if the supplied calibration_dataset is a local path, try to load it as JSON lines.
        if os.path.exists(calibration_dataset):
            try:
                ds = load_dataset("json", data_files=calibration_dataset, split="train")
                print(f"Loaded calibration dataset from local file {calibration_dataset}.")
            except Exception as e2:
                print(f"Error loading local json dataset from '{calibration_dataset}': {e2}")
                return []
        else:
            return []

    print(f"Dataset features: {ds.features}")
    # Keep only examples whose extracted text is at least 80% of 'seqlen'
    # (the wikitext-2-raw-v1 dataset has many short examples).
    ds = ds.filter(lambda x: len(get_text_from_example(x)) >= int(seqlen * 0.8))
    sample_range = min(nsamples, len(ds))
    calibration_data = []
    for i in range(sample_range):
        example = ds[i]
        text = get_text_from_example(example)
        tokenized = tokenizer(text, truncation=True, max_length=seqlen, return_tensors="pt")
        tokenized = {k: v.squeeze(0) for k, v in tokenized.items()}
        calibration_data.append(tokenized)
    return calibration_data
def calculate_avg_ppl(model, tokenizer):
    """
    Computes the average perplexity on the wikitext-2-raw-v1 train split using GPTQModel's Perplexity utility.
    """
    ppl = Perplexity(
        model=model,
        tokenizer=tokenizer,
        dataset_path="wikitext",
        dataset_name="wikitext-2-raw-v1",
        split="train",
        text_column="text",
    )
    ppl_values = ppl.calculate(n_ctx=512, n_batch=512)
    avg = sum(ppl_values) / len(ppl_values)
    return avg
def get_pinned_package_versions():
    """
    Retrieves pinned package versions using 'uv pip freeze'.
    Returns a dictionary mapping lowercased package names to their versions.
    """
    try:
        result = subprocess.run(["uv", "pip", "freeze"], capture_output=True, text=True, check=True)
        packages_output = result.stdout.strip()
        versions = {}
        for line in packages_output.splitlines():
            if "==" in line:
                package_name, package_version = line.split("==", 1)
                versions[package_name.lower()] = package_version
        return versions
    except subprocess.CalledProcessError as e:
        typer.echo(f"Error running 'uv pip freeze': {e}", err=True)
        return {}
    except FileNotFoundError:
        typer.echo("uv command not found. Make sure uv is installed and in your PATH.", err=True)
        return {}
@app.command()
def main(
    seq_len: int = typer.Option(4096, help="Sequence length for tokenization and calibration."),
    nsamples: int = typer.Option(512, help="Number of samples to use for calibration."),
    source_model: str = typer.Option("rombodawg/Rombos-LLM-V2.6-Qwen-14b",
                                     help="Source model HF repository identifier."),
    calibration_dataset: str = typer.Option("wikitext/wikitext-2-raw-v1",
                                            help="Calibration dataset identifier (in 'dataset/config' format) or local file path."),
    hf_token: str = typer.Option(HF_TOKEN,
                                 help="Hugging Face token for creating/updating your repo."),
    upload_only: bool = typer.Option(False, help="Only upload the quantized model to the Hugging Face Hub."),
    # Allow only group sizes 32, 64, and 128 via typer:
    group_size: GroupSize = typer.Option(GroupSize.accurate, help="Group size for quantization; accurate: 32, "
                                                                  "balanced: 64, fast: 128. Default: accurate."),
    mse: bool = typer.Option(True, help="Use MSE instead of MAE for the loss function."),
    size_multi: float = typer.Option(3.5, help="Model size multiplier; depends on the source model. Default: 3.5."),
):
    # Prepare destination directory and model names.
    model_name = source_model.split("/")[-1]
    if size_multi != 1:
        size_multiplier = size_multi
        size_multiplier_len = size_multiplier / 2
    else:
        size_multiplier = 1
        size_multiplier_len = 1
    nsamples = int(nsamples * size_multiplier)
    seq_len = int(seq_len * size_multiplier_len)
    quantized_model_name = f"{model_name}_gptq_g{int(group_size.value)}_4bit"
    quantized_model_dir = os.path.expanduser(os.path.join("~/models/quantized", quantized_model_name))
    if not upload_only:
        # Remove the directory if it already exists.
        if os.path.exists(quantized_model_dir):
            shutil.rmtree(quantized_model_dir)
        # Create directory for quantized model.
        os.makedirs(quantized_model_dir, exist_ok=True)

        typer.echo("Loading tokenizer from source model...")
        tokenizer_obj = AutoTokenizer.from_pretrained(source_model, use_fast=True)

        typer.echo("Loading calibration dataset...")
        typer.echo(f"Calibration dataset: {calibration_dataset}")
        calibration_data = get_calibration_dataset(tokenizer_obj, nsamples, seq_len, calibration_dataset)
        if not calibration_data:
            typer.echo("Calibration dataset is empty. Aborting.", err=True)
            raise typer.Exit(code=1)

        if mse:
            # MSE (together with the increased damp_percent) fits mistral-small-24b particularly well.
            mse = 0.01
            quantize_config = QuantizeConfig(bits=4, group_size=int(group_size.value),
                                             damp_percent=0.015, mse=mse)
        else:
            quantize_config = QuantizeConfig(bits=4, group_size=int(group_size.value),
                                             damp_percent=0.01)

        device = "cuda:0" if torch.cuda.is_available() else "cpu"
        typer.echo(f"Loading model in {device} mode...")
        model = GPTQModel.load(source_model, quantize_config)

        typer.echo("Quantizing model...")
        # Heuristic: scale batch_size with the sample count, shrinking it for
        # smaller group sizes (which need more memory during quantization).
        group_size_factor = int(128 / int(group_size.value))
        model.quantize(calibration_data, auto_gc=False,
                       batch_size=max(1, int(int((nsamples * 0.1) / group_size_factor) *
                                             int(size_multiplier_len))))

        # Retrieve Hugging Face user info for README generation.
        package_versions = get_pinned_package_versions()
        username = get_my_user(hf_token)
        script_content = self_read_script()

        typer.echo(f"Saving quantized model to {quantized_model_dir} using Transformers safe serialization...")
        try:
            model.save_pretrained(quantized_model_dir)
            tokenizer_obj.save_pretrained(quantized_model_dir)
        except Exception as ex:
            typer.echo(f"Error during saving with safe_serialization: {ex}. Aborting.")
            raise
        typer.echo(f"Quantized model saved to {quantized_model_dir}")
    else:
        tokenizer_obj = AutoTokenizer.from_pretrained(source_model, use_fast=True)
        package_versions = get_pinned_package_versions()
        username = get_my_user(hf_token)
        script_content = self_read_script()
        device = "cuda:0" if torch.cuda.is_available() else "cpu"

    model = GPTQModel.load(quantized_model_dir, device=device)
    avg_ppl = calculate_avg_ppl(model, tokenizer_obj)
    typer.echo(f"Average perplexity (PPL) on wikitext v2 dataset: {avg_ppl}")

    deps = Path("./pyproject.toml")
    shutil.copy(deps, quantized_model_dir)
    generate_readme(calibration_dataset, nsamples, quantized_model_dir,
                    quantized_model_name, script_content, seq_len, source_model, username, avg_ppl,
                    group_size=int(group_size.value))
    GPTQModel.push_to_hub(quantized_path=quantized_model_dir, private=False, repo_id=quantized_model_name,
                          token=HF_TOKEN)
    typer.echo(f"Model uploaded to Hugging Face repo: {quantized_model_name}")

    demo_input = tokenizer_obj("test is", return_tensors="pt").to(device)
    generated_ids = model.generate(**demo_input)
    output_text = tokenizer_obj.decode(generated_ids[0])
    typer.echo(f"Inference demo output: {output_text}")
    typer.echo(f"Average perplexity (PPL) on wikitext-2-raw-v1 dataset: {avg_ppl}")
def self_read_script():
    try:
        script_path = os.path.abspath(__file__)
        with open(script_path, "r") as f:
            script_content = f.read()
    except Exception as e:
        script_content = "Error reading script content: " + str(e)
    return script_content
def get_my_user(hf_token):
    api = HfApi(token=hf_token)
    user_info = api.whoami()
    try:
        username = user_info.get("name") or user_info.get("username")
    except Exception as e:
        typer.echo(f"Error retrieving username from Hugging Face API: {e}. Using default username.")
        username = None
    if not username:
        typer.echo("Could not determine your Hugging Face username from the token, defaulting to hard coded username.",
                   err=True)
        username = "JustJaro"
    return username
def generate_readme(calibration_dataset, nsamples, quantized_model_dir,
                    quantized_model_name, script_content, seq_len, source_model, username, avg_ppl,
                    group_size: int = 128):
    readme_content = f"""---
tags:
- gptq
- quantization
- 4bit
- confidentialmind
- text-generation
- apache2.0
- mistral-small-24b
---
# 🔥 Quantized Model: {quantized_model_name} 🔥

This is a 4-bit quantized version of the [{source_model}](https://huggingface.co/{source_model}) model, quantized by [ConfidentialMind.com](https://www.confidentialmind.com) 🤗✨

It leverages the open-source GPTQModel quantization to achieve 4-bit precision with a group size of {group_size}, resulting in a smaller, faster model with minimal performance degradation.

Quantization ran on a single NVIDIA A100 GPU with 80GB of VRAM.

*Note:* `batch_size` is set quite high because the model is small; you may need to adjust it to your GPU VRAM.
## Model Details
- **Original Model:** [{source_model}](https://huggingface.co/{source_model})
- **Quantized Model:** {quantized_model_name} (this repository)
- **Quantization Method:** GPTQ (4-bit, group size {group_size})
- **Quantization Library:** [GPTQModel](https://github.com/ModelCloud/GPTQModel/tree/main)
- **Calibration Dataset:** {calibration_dataset} (using {nsamples} samples with seq len {seq_len})
- **Quantized by:** [ConfidentialMind.com](https://www.confidentialmind.com)
## Usage
```python
from gptqmodel import GPTQModel
from transformers import AutoTokenizer
# Use the local directory or {username}/{quantized_model_name} after upload
quantized_model_id = "{quantized_model_dir}" # or "{username}/{quantized_model_name}"
tokenizer = AutoTokenizer.from_pretrained(quantized_model_id)
model = GPTQModel.load(quantized_model_id, device="cuda:0") # or "cpu"
input_text = "This is a test prompt"
inputs = tokenizer(input_text, return_tensors="pt").to("cuda:0")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Package Versions and Installation Instructions

See `pyproject.toml` for the exact UV project file. See the [GPTQModel](https://github.com/ModelCloud/GPTQModel/tree/main) repo for more details on how to install the package.

Use the provided `pyproject.toml`:

```bash
uv venv
source .venv/bin/activate
uv sync
```

## Environment Variables

```bash
HF_TOKEN=<YOUR_HF_TOKEN>
TOKENIZERS_PARALLELISM="true"
PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
```

## Quantization Script

Below is the exact `quantize.py` script used to generate this model (with the exact versions of the dependencies):

````python
{script_content}
````
## Quantization Performance

Average perplexity (PPL) on wikitext v2 dataset: {avg_ppl}

## Disclaimer

This model is for research purposes only. It may inherit limitations and biases from the original model and the quantization process. Please use responsibly and refer to the original model card for more details.

## Contact

For any questions or support, please visit [ConfidentialMind.com](https://www.confidentialmind.com) or contact us directly.

## License

This model inherits the license from the original model. Please refer to the original model card for more details.

Original model card: `{source_model}`

## Author

This model was quantized by [Jaro](https://www.linkedin.com/in/jaroai/)

## Acknowledgements

Quantization performed using the GPTQModel pipeline.

TODO: Add `gptqmodel.utils.eval` integration and auto-generation of eval table.

---
*Generated and quantized using GPTQModel.*
"""
    readme_path = os.path.join(quantized_model_dir, "README.md")
    with open(readme_path, "w") as f:
        f.write(readme_content)
    typer.echo("README.md created with detailed information.")
if __name__ == "__main__":
    app()
`````
## Quantization Performance
Average perplexity (PPL) on wikitext v2 dataset: 28.71075403372412
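
This figure comes from the `calculate_avg_ppl` helper in the script above. A minimal sketch to reproduce it locally (the context and batch sizes mirror the script):

```python
# Sketch: recompute the reported wikitext-2 perplexity with GPTQModel's utility.
from gptqmodel import GPTQModel
from gptqmodel.utils import Perplexity
from transformers import AutoTokenizer

model_id = "JustJaro/Arcee-Blitz_gptq_g32_4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = GPTQModel.load(model_id, device="cuda:0")

ppl = Perplexity(
    model=model,
    tokenizer=tokenizer,
    dataset_path="wikitext",
    dataset_name="wikitext-2-raw-v1",
    split="train",
    text_column="text",
)
values = ppl.calculate(n_ctx=512, n_batch=512)
print(sum(values) / len(values))  # ≈ 28.71 for this checkpoint
```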
## Disclaimer
This model is for research purposes only. It may inherit limitations and biases from the original model and the quantization process. Please use responsibly and refer to the original model card for more details.
## Contact
For any questions or support, please visit [ConfidentialMind.com](https://www.confidentialmind.com) or contact us directly.
## License
This model inherits the license from the original model. Please refer to the original model card for more details.
Original model card: `arcee-ai/Arcee-Blitz`
## Author
This model was quantized by [Jaro](https://www.linkedin.com/in/jaroai/)
## Acknowledgements
Quantization performed using the GPTQModel pipeline.
TODO: Add `gptqmodel.utils.eval` integration and auto-generation of eval table.
---
*Generated and quantized using GPTQModel.*