
GPTQ quantized falcon-rw-1b

| Branch | Bits | Group Size | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Description |
|--------|------|------------|-----------|--------|--------------|---------|------|---------|-------------|
| main   | 4    | None       | No        | 0.01   | c4           | 4096    | --   | No      | 4-bit, without Act Order and no group size. |
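To make the table's settings concrete, here is a minimal sketch of round-to-nearest 4-bit quantization with a single scale and zero-point per weight row, which corresponds to the "Group Size = None" configuration above. This is an illustration only, not the GPTQ algorithm itself (GPTQ additionally applies error-compensating weight updates using second-order information); the function names and sample values are hypothetical.

```python
# Illustrative round-to-nearest 4-bit quantization (not full GPTQ).
# "Group Size = None" means one (scale, zero-point) pair covers the
# whole row instead of one pair per group of e.g. 128 weights.

def quantize_4bit(weights):
    """Map floats to 4-bit integers (0..15) with one scale/zero-point."""
    w_min, w_max = min(weights), max(weights)
    scale = (w_max - w_min) / 15 or 1.0   # 15 = 2**4 - 1 levels
    zero = round(-w_min / scale)          # integer zero-point
    q = [max(0, min(15, round(w / scale) + zero)) for w in weights]
    return q, scale, zero

def dequantize_4bit(q, scale, zero):
    """Recover approximate floats from 4-bit codes."""
    return [(qi - zero) * scale for qi in q]

# Hypothetical weight row for demonstration.
row = [0.12, -0.53, 0.31, 0.05, -0.22, 0.44]
q, scale, zero = quantize_4bit(row)
approx = dequantize_4bit(q, scale, zero)
```

With a per-row scale, the reconstruction error of each weight is bounded by roughly one quantization step; smaller group sizes (e.g. 128) trade extra metadata for tighter per-group scales and lower error.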
Safetensors model size: 1.08B params. Tensor types: I32 · FP16.