---
license: apache-2.0
---

Generated from https://github.com/yhyu13/AutoGPTQ.git (branch `cuda_dev`).

Original weights: https://huggingface.co/tiiuae/falcon-7b

Note: this is a quantization of the base model, which has not yet been fine-tuned with chat instructions.