---
base_model:
- meta-llama/Llama-3.2-1B
---

# meta-llama/Llama-3.2-1B (Quantized)

## Description

This model is a quantized version of the original model `meta-llama/Llama-3.2-1B`. It was quantized using TorchAo.

## Quantization Details

- **Quantization Parameters**: `TorchAoConfig("int4_weight_only", group_size=128)`

## Usage

You can use this model in your applications by loading it directly from the Hugging Face Hub.

To run inference with `Llama-3.2-1B-TORCHAO-W4`, `torch` and `torchao` need to be installed:

```bash
pip install torch torchao --upgrade
```

Then install the latest version of transformers (with the accelerate extra):

```bash
pip install transformers[accelerate] --upgrade
```

For inference, the model can be instantiated like any other causal language model via `AutoModelForCausalLM`:

```python
from transformers import AutoModelForCausalLM

# Load the quantized checkpoint; torch and torchao must be installed.
model = AutoModelForCausalLM.from_pretrained("Llama-3.2-1B-TORCHAO-W4")
```
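
As a minimal end-to-end sketch of running generation (the tokenizer loading, `device_map="auto"`, `torch_dtype="auto"`, and the example prompt are assumptions added here for illustration, not part of the original card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Llama-3.2-1B-TORCHAO-W4"

# The tokenizer is unchanged by quantization; it is assumed to ship with the repository.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",    # assumption: place the quantized weights on the available accelerator
    torch_dtype="auto",   # assumption: keep non-quantized tensors in their saved dtype
)

# Simple generation as a smoke test.
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Depending on your hardware, the int4 weight-only layout may require a CUDA device; adjust the loading arguments accordingly.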