---
license: apache-2.0
---

Generated with https://github.com/yhyu13/AutoGPTQ.git, branch `cuda_dev`.

Original weights: https://huggingface.co/tiiuae/falcon-7b

Note: this is a quantization of the base model, which has not yet been fine-tuned with chat instructions.
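
A minimal loading sketch, assuming the AutoGPTQ Python API (`AutoGPTQForCausalLM.from_quantized`) and a CUDA device; the repo id `your-username/falcon-7b-gptq` is a placeholder, not the actual id of this repository:

```python
# Hypothetical usage sketch -- the repo id below is a placeholder.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_id = "your-username/falcon-7b-gptq"  # replace with this repo's id

# Falcon uses custom modeling code, hence trust_remote_code=True.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoGPTQForCausalLM.from_quantized(
    model_id,
    device="cuda:0",
    trust_remote_code=True,
)

# Plain text completion -- no chat template, since the base model
# is not instruction-tuned.
inputs = tokenizer("The capital of France is", return_tensors="pt").to("cuda:0")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Because the underlying model is the raw base model, prompt it as a text completer rather than with chat-style instructions.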