Model Card for Llama 2 7B, 4-bit GPTQ

This is Meta's Llama 2 7B quantized to 4-bit with AutoGPTQ through its Hugging Face Transformers integration.

Model Details

Model Sources

The method and code used to quantize the model are explained here: Quantize and Fine-tune LLMs with GPTQ Using Transformers and TRL

Uses

This model is the pre-trained base model; it is not fine-tuned or instruction-tuned. You can fine-tune it with PEFT by training adapters (e.g., LoRA) on top of the frozen quantized weights.

Other quantized versions

Model Card Contact

The Kaitchup
