---
license: apache-2.0
tags:
- 2bit
- llama
- yi
- 34b
---
This model can run on a GPU with 11 GB of memory. It is a 2-bit quantization of the base model using QuIP#, a weights-only quantization method that achieves near-fp16 performance using only 2 bits per weight.

Source: https://github.com/Cornell-RelaxML/quip-sharp
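To see why 2-bit weights fit in 11 GB, a back-of-the-envelope estimate helps (this sketch assumes a ~34B parameter count and counts weights only; activations and the KV cache add overhead on top):

```python
def weight_memory_gib(num_params: float, bits_per_weight: float) -> float:
    """Memory needed to store the weights alone, in GiB."""
    total_bits = num_params * bits_per_weight
    return total_bits / 8 / 1024**3

params = 34e9  # approximate parameter count of a 34B model

for bits in (16, 4, 2):
    print(f"{bits:>2}-bit weights: {weight_memory_gib(params, bits):6.1f} GiB")
```

At fp16 the weights alone need roughly 63 GiB, while at 2 bits they need about 8 GiB, which is why the model fits on an 11 GB GPU.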