ukim4 committed on
Commit
c3ab433
0 Parent(s):

Duplicate from localmodels/LLM

.gitattributes ADDED
@@ -0,0 +1,35 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,50 @@
+ ---
+ duplicated_from: localmodels/LLM
+ ---
+ # WizardLM 7B v1.0 Uncensored ggml
+
+ From: https://huggingface.co/ehartford/WizardLM-7B-V1.0-Uncensored
+
+ ---
+
+ ### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
+
+ Quantized using an older version of llama.cpp so that they remain compatible with llama.cpp as of May 19, commit 2d5db48.
+
+ ### k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
+
+ These quantization methods are compatible with llama.cpp as of June 6, commit 2d43387.
+
+ ---
+
+ ## Provided files
+ | Name | Quant method | Bits | Size | Max RAM required (no GPU offloading) | Use case |
+ | ---- | ---- | ---- | ---- | ---- | ----- |
+ | wizardlm-7b-v1.0-uncensored.ggmlv3.q2_K.bin | q2_K | 2 | 2.87 GB | 5.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
+ | wizardlm-7b-v1.0-uncensored.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 3.60 GB | 6.10 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
+ | wizardlm-7b-v1.0-uncensored.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 3.28 GB | 5.78 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
+ | wizardlm-7b-v1.0-uncensored.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 2.95 GB | 5.45 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors. |
+ | wizardlm-7b-v1.0-uncensored.ggmlv3.q4_0.bin | q4_0 | 4 | 3.79 GB | 6.29 GB | Original llama.cpp quant method, 4-bit. |
+ | wizardlm-7b-v1.0-uncensored.ggmlv3.q4_1.bin | q4_1 | 4 | 4.21 GB | 6.71 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models. |
+ | wizardlm-7b-v1.0-uncensored.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 4.08 GB | 6.58 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K. |
+ | wizardlm-7b-v1.0-uncensored.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 3.83 GB | 6.33 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors. |
+ | wizardlm-7b-v1.0-uncensored.ggmlv3.q5_0.bin | q5_0 | 5 | 4.63 GB | 7.13 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage, and slower inference. |
+ | wizardlm-7b-v1.0-uncensored.ggmlv3.q5_1.bin | q5_1 | 5 | 5.06 GB | 7.56 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy and resource usage, and slower inference. |
+ | wizardlm-7b-v1.0-uncensored.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 4.78 GB | 7.28 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K. |
+ | wizardlm-7b-v1.0-uncensored.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 4.65 GB | 7.15 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors. |
+ | wizardlm-7b-v1.0-uncensored.ggmlv3.q6_K.bin | q6_K | 6 | 5.53 GB | 8.03 GB | New k-quant method. Uses GGML_TYPE_Q8_K (6-bit quantization) for all tensors. |
+ | wizardlm-7b-v1.0-uncensored.ggmlv3.q8_0.bin | q8_0 | 8 | 7.16 GB | 9.66 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
+
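+ As a rough usage sketch, one way to load the files above is llama-cpp-python, the Python bindings for llama.cpp. These are ggmlv3 files, which predate the GGUF format, so a GGML-era build of the bindings is assumed; the filename, context size, and thread count below are illustrative.
+
+ ```python
+ # Minimal sketch: run a ggmlv3 quant with llama-cpp-python (GGML-era build assumed).
+ from llama_cpp import Llama
+
+ llm = Llama(
+     model_path="wizardlm-7b-v1.0-uncensored.ggmlv3.q4_K_M.bin",  # any file from the table above
+     n_ctx=2048,    # context window
+     n_threads=8,   # adjust to your CPU
+ )
+
+ prompt = "USER: Write a haiku about quantization. ASSISTANT:"
+ out = llm(prompt, max_tokens=128, stop=["USER:"])
+ print(out["choices"][0]["text"].strip())
+ ```
+
+ Per the table, q4_K_M needs roughly 6.6 GB of RAM with no GPU offloading; pick a smaller quant if that is tight.
+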
+ ---
+
+ # WizardLM-7B-V1.0-Uncensored Model Card
+
+ This is a retraining of https://huggingface.co/WizardLM/WizardLM-7B-V1.0 with a filtered dataset, intended to reduce refusals, avoidance, and bias.
+
+ Note that LLaMA itself has inherent ethical beliefs, so there's no such thing as a "truly uncensored" model. But this model will be more compliant than WizardLM/WizardLM-7B-V1.0.
+
+ Shout out to the open source AI/ML community, and everyone who helped me out.
+
+ Note: An uncensored model has no guardrails. You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car. Publishing anything this model generates is the same as publishing it yourself. You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.
+
+ Unlike WizardLM/WizardLM-7B-V1.0, but like WizardLM/WizardLM-13B-V1.0 and WizardLM/WizardLM-33B-V1.0, this model is trained with Vicuna-1.1 style prompts.
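+
+ As a sketch of that prompt style (the system message below is the common Vicuna default and is an assumption here, not something specified by this card):
+
+ ```python
+ # Assemble a Vicuna-1.1 style prompt (system message is the common Vicuna default).
+ system = (
+     "A chat between a curious user and an artificial intelligence assistant. "
+     "The assistant gives helpful, detailed, and polite answers to the user's questions."
+ )
+ instruction = "Explain what k-quant methods are in one paragraph."
+ prompt = f"{system} USER: {instruction} ASSISTANT:"
+ print(prompt)
+ ```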
wizardlm-7b-v1.0-uncensored.ggmlv3.q2_K.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:491a5e535454aae34768a43755697948e1d78b3ddd12e0d967ce86d1cf549e84
+ size 2866807424
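
Each .bin entry in this commit is a Git LFS pointer recording the file's sha256 (the oid) and its size in bytes. After downloading a file, it can be checked against its pointer; a small sketch using the q2_K pointer above, assuming the file sits in the current directory under its original name:

```python
# Verify a downloaded quant against the sha256 oid from its LFS pointer.
import hashlib

path = "wizardlm-7b-v1.0-uncensored.ggmlv3.q2_K.bin"  # local filename (assumed)
expected = "491a5e535454aae34768a43755697948e1d78b3ddd12e0d967ce86d1cf549e84"  # oid above

h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
        h.update(chunk)

print("sha256 matches:", h.hexdigest() == expected)
```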
wizardlm-7b-v1.0-uncensored.ggmlv3.q3_K_L.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3ec78f4c444a5299733e470df64d7f9bb519b2a3c981a68ea12e93de252ca8a0
+ size 3596821120
wizardlm-7b-v1.0-uncensored.ggmlv3.q3_K_M.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f77f0faa4f5a1c08beff0639322e9d9d59ce825bf141d87b7b9d4094adf95d59
+ size 3282248320
wizardlm-7b-v1.0-uncensored.ggmlv3.q3_K_S.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:213702c203af2bee747afaa24feba18145a1547f15a54b0b08db742aaa7eac59
+ size 2948014720
wizardlm-7b-v1.0-uncensored.ggmlv3.q4_0.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d97897d97f93bbf9ca5e38dd2fd75e14380b887fbaa313332e6fbe2377907dc2
+ size 3791725184
wizardlm-7b-v1.0-uncensored.ggmlv3.q4_1.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e6207856812c77cbc87c0d4ea05db0edeae96a65687acca4807241fb0aaed822
+ size 4212859520
wizardlm-7b-v1.0-uncensored.ggmlv3.q4_K_M.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:29863f6f869f6abbf506809864636f599d3911b802cb5ae9b7e9d3ad759aaa09
+ size 4080714368
wizardlm-7b-v1.0-uncensored.ggmlv3.q4_K_S.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ea8d475daab69e37384fa2380dd1eefef80bc25e94853829e2c397aa4158e7fe
+ size 3825517184
wizardlm-7b-v1.0-uncensored.ggmlv3.q5_0.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:391c11e1f20ee16c3c417c07e8330b704867598b802c00118427971c48b21713
+ size 4633993856
wizardlm-7b-v1.0-uncensored.ggmlv3.q5_1.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c15b81036ca913061e841396f92b2622ea6c23cc959a62cd132d91bc4128f0b3
+ size 5055128192
wizardlm-7b-v1.0-uncensored.ggmlv3.q5_K_M.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2f6f532a277f479665d57db19bee6ec1b6f4e4ed1f06b8cc2a917ba7a8f57aa5
+ size 4782867072
wizardlm-7b-v1.0-uncensored.ggmlv3.q5_K_S.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b00eba18df768427e3c170aa7b341de97918fcf1ea1b28bf740b976d10e18d68
+ size 4651401856
wizardlm-7b-v1.0-uncensored.ggmlv3.q6_K.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6b19325861a7c5aa8524de61374e2385278d4e95214e1cd865e6d2c23d0d94ae
+ size 5528904320
wizardlm-7b-v1.0-uncensored.ggmlv3.q8_0.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:395b9499798d407cae074046340ade48de6562b7b7c0b38e601705e3db733947
+ size 7160799872