---
base_model:
- meta-llama/Llama-3.1-70B
---

# meta-llama/Llama-3.1-70B (Quantized)

## Description

This model is a quantized version of the original model `meta-llama/Llama-3.1-70B`. It was quantized with torchao using int8 weight-only quantization.

## Quantization Details

- **Quantization Type**: int8_weight_only
- **Group Size**: None

## Usage

You can load this model in your applications directly from the Hugging Face Hub:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# AutoModelForCausalLM is the idiomatic class for a Llama checkpoint
# intended for text generation.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-70B")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-70B")
```
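For intuition, int8 weight-only quantization stores each weight matrix as 8-bit integers plus a per-output-channel floating-point scale, and dequantizes back to float at compute time. The sketch below illustrates the idea in plain NumPy; it is independent of torchao's actual implementation, and the function names are illustrative only.

```python
import numpy as np

def quantize_int8_weight_only(w):
    # Symmetric per-output-channel quantization: one scale per row,
    # chosen so the largest-magnitude weight maps to 127.
    scale = np.abs(w).max(axis=1, keepdims=True) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover an approximation of the original float weights.
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 8)).astype(np.float32)
q, scale = quantize_int8_weight_only(w)
w_hat = dequantize(q, scale)
max_err = np.max(np.abs(w - w_hat))  # small relative to the weight range
```

The int8 tensor plus scales takes roughly a quarter of the memory of the original float32 weights, at the cost of a small per-element rounding error bounded by half a quantization step.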