# WizardLM 30B v1.0 GPTQ

From: https://huggingface.co/WizardLM/WizardLM-30B-V1.0

---

## Model

* wizardlm-30b-1.0-4bit.safetensors
* Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
* Works with AutoGPTQ (see the loading sketch below)
* Parameters: Groupsize = None. --act-order.
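As a rough illustration of the AutoGPTQ path, the sketch below loads the quantized checkpoint and runs a short generation. It assumes the `auto-gptq` and `transformers` Python packages are installed; the model path and prompt are placeholders rather than part of this repository's documented usage, and only `model_basename` is taken from the file name listed above.

```python
# Minimal sketch: load the GPTQ checkpoint with AutoGPTQ and generate text.
# Assumes auto-gptq and transformers are installed and a CUDA GPU is available.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

# Placeholder: point this at the local directory or repo holding the quantized weights.
model_id = "path-or-repo/WizardLM-30B-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(
    model_id,
    model_basename="wizardlm-30b-1.0-4bit",  # matches the .safetensors file listed above
    use_safetensors=True,
    device="cuda:0",
    use_triton=False,  # set True to use the Triton kernels instead of CUDA
)

prompt = "Tell me about quantization."  # placeholder prompt; no prompt template implied
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Because the checkpoint was quantized with Groupsize = None and --act-order, no extra quantization arguments are needed at load time; AutoGPTQ reads those settings from the saved quantize config alongside the weights.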