WizardLM 7B v1.0 Uncensored GGML

From: https://huggingface.co/ehartford/WizardLM-7B-V1.0-Uncensored


Original llama.cpp quant methods: q4_0, q4_1, q5_0, q5_1, q8_0. These files were quantized with an older version of llama.cpp and are compatible with llama.cpp from May 19, commit 2d5db48.

New k-quant methods: q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q5_K_M, q6_K. These files are compatible with llama.cpp from June 6, commit 2d43387.
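
If you build llama.cpp from source, one way to make sure your build matches the files you download is to pin the corresponding commit before compiling. A minimal sketch, assuming a plain CPU build with make (adjust for your platform):

```sh
# Illustrative build steps; the commit hashes come from the compatibility notes above.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# For the k-quant files (q2_K ... q6_K):
git checkout 2d43387

# Or, for the original quant files (q4_0, q4_1, q5_0, q5_1, q8_0):
# git checkout 2d5db48

make
```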


Provided files

| Name | Quant method | Bits | Size | Max RAM required, no GPU offloading | Use case |
| ---- | ---- | ---- | ---- | ---- | ---- |
| wizardlm-7b-v1.0-uncensored.ggmlv3.q2_K.bin | q2_K | 2 | 2.87 GB | 5.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| wizardlm-7b-v1.0-uncensored.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 3.60 GB | 6.10 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
| wizardlm-7b-v1.0-uncensored.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 3.28 GB | 5.78 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
| wizardlm-7b-v1.0-uncensored.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 2.95 GB | 5.45 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors. |
| wizardlm-7b-v1.0-uncensored.ggmlv3.q4_0.bin | q4_0 | 4 | 3.79 GB | 6.29 GB | Original llama.cpp quant method, 4-bit. |
| wizardlm-7b-v1.0-uncensored.ggmlv3.q4_1.bin | q4_1 | 4 | 4.21 GB | 6.71 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, has quicker inference than the q5 models. |
| wizardlm-7b-v1.0-uncensored.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 4.08 GB | 6.58 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K. |
| wizardlm-7b-v1.0-uncensored.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 3.83 GB | 6.33 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors. |
| wizardlm-7b-v1.0-uncensored.ggmlv3.q5_0.bin | q5_0 | 5 | 4.63 GB | 7.13 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage, and slower inference. |
| wizardlm-7b-v1.0-uncensored.ggmlv3.q5_1.bin | q5_1 | 5 | 5.06 GB | 7.56 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy and resource usage, and even slower inference. |
| wizardlm-7b-v1.0-uncensored.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 4.78 GB | 7.28 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K. |
| wizardlm-7b-v1.0-uncensored.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 4.65 GB | 7.15 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors. |
| wizardlm-7b-v1.0-uncensored.ggmlv3.q6_K.bin | q6_K | 6 | 5.53 GB | 8.03 GB | New k-quant method. Uses GGML_TYPE_Q8_K (6-bit quantization) for all tensors. |
| wizardlm-7b-v1.0-uncensored.ggmlv3.q8_0.bin | q8_0 | 8 | 7.16 GB | 9.66 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
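
As a rough usage sketch, any file from the table can be run with llama.cpp's main binary. The file name, thread count, and sampling parameters below are illustrative, not recommendations:

```sh
# Illustrative llama.cpp invocation; set -t to your physical core count
# and -m to whichever quantized file you downloaded.
./main -t 8 -m wizardlm-7b-v1.0-uncensored.ggmlv3.q4_K_M.bin \
  --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n 256 \
  -p "USER: Write a story about llamas ASSISTANT:"
```

With a cuBLAS or CLBlast build of llama.cpp you can additionally pass -ngl with the number of layers to offload to the GPU, which reduces the RAM requirements listed in the table.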

WizardLM-7B-V1.0-Uncensored Model Card

This is a retraining of https://huggingface.co/WizardLM/WizardLM-7B-V1.0 with a filtered dataset, intended to reduce refusals, avoidance, and bias.

Note that LLaMA itself has inherent ethical beliefs, so there's no such thing as a "truly uncensored" model. But this model will be more compliant than WizardLM/WizardLM-7B-V1.0.

Shout out to the open source AI/ML community, and everyone who helped me out.

Note: An uncensored model has no guardrails. You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car. Publishing anything this model generates is the same as publishing it yourself. You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.

Unlike WizardLM/WizardLM-7B-V1.0, but like WizardLM/WizardLM-13B-V1.0 and WizardLM/WizardLM-33B-V1.0, this model is trained with Vicuna-1.1 style prompts.
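
For reference, a Vicuna-1.1 style prompt looks roughly like the following; the exact system message can vary, and this is the commonly used default:

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Why is the sky blue? ASSISTANT:
```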
