Panchovix committed on
Commit
ca5ff77
1 Parent(s): c47c96e

Update README.md

Files changed (1)
  1. README.md +5 -0
README.md CHANGED
@@ -1,3 +1,8 @@
  ---
  license: other
  ---
+ [WizardLM-33B-V1.0-Uncensored](https://huggingface.co/ehartford/WizardLM-33B-V1.0-Uncensored) merged with kaiokendev's [33b SuperHOT 8k LoRA](https://huggingface.co/kaiokendev/superhot-30b-8k-no-rlhf-test), quantized at 4 bit.
+
+ It was created with GPTQ-for-LLaMA, using group size 32 and act-order (true) to minimize perplexity loss versus the FP16 model.
+
+ I HIGHLY suggest using exllama to avoid some VRAM issues.
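
For reference, loading a 4-bit GPTQ checkpoint like this with exllama could look something like the sketch below. It follows the pattern of exllama's own example scripts; the model directory path is a placeholder, and the `max_seq_len = 8192` / `compress_pos_emb = 4` settings are assumptions based on how SuperHOT 8k merges are typically run, not something this commit specifies.

```python
# Minimal sketch, modeled on exllama's example_basic.py. The path is a
# placeholder and the 8k context settings are assumptions for a
# SuperHOT-merged model, not values taken from this commit.
import os, glob

from model import ExLlama, ExLlamaCache, ExLlamaConfig
from tokenizer import ExLlamaTokenizer
from generator import ExLlamaGenerator

# Placeholder: directory holding config.json, tokenizer.model, *.safetensors
model_directory = "/models/WizardLM-33B-V1.0-SuperHOT-8k-4bit-32g/"

tokenizer_path = os.path.join(model_directory, "tokenizer.model")
model_config_path = os.path.join(model_directory, "config.json")
model_path = glob.glob(os.path.join(model_directory, "*.safetensors"))[0]

config = ExLlamaConfig(model_config_path)
config.model_path = model_path
config.max_seq_len = 8192     # assumed: SuperHOT extended context
config.compress_pos_emb = 4   # assumed: 8192 / 2048 = 4x position compression

model = ExLlama(config)
tokenizer = ExLlamaTokenizer(tokenizer_path)
cache = ExLlamaCache(model)
generator = ExLlamaGenerator(model, tokenizer, cache)

print(generator.generate_simple("Hello, my name is", max_new_tokens=64))
```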