huggyllama/llama-30b merged with serpdotai/llama-oasst-lora-30B, quantized to 4-bit with a group size of 128. 16-bit and non-groupsize versions are available in my repo as well.
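
A minimal sketch of how a merge like this can be reproduced with `transformers` and `peft`; the repo IDs are the ones named above, while the output directory name is an illustrative placeholder. Quantization (e.g. GPTQ at 4-bit with group size 128) would be run as a separate step on the merged output.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the 16-bit base model and its tokenizer.
base = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-30b",
    torch_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-30b")

# Attach the OASST LoRA adapter and fold its weights into the base model.
model = PeftModel.from_pretrained(base, "serpdotai/llama-oasst-lora-30B")
merged = model.merge_and_unload()

# Save the merged 16-bit checkpoint; 4-bit quantization is applied to this
# output afterwards. The path below is a placeholder.
merged.save_pretrained("llama-30b-oasst-merged")
tokenizer.save_pretrained("llama-30b-oasst-merged")
```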