Attention quantization: HQQ 4-bit, group size 64, compressed zero-point, compressed scale (group size 256)

Experts quantization: HQQ 3-bit, group size 64, compressed zero-point, compressed scale (group size 128)
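
The sketch below shows how a configuration like this might be expressed with the `hqq` library's `BaseQuantizeConfig`, assuming "compress zero" and "compress scale" map to the library's `quant_zero` / `quant_scale` options. The Mixtral-style module names are assumptions for illustration, not taken from this card.

```python
# Minimal sketch, assuming the hqq library (https://github.com/mobiusml/hqq).
from hqq.core.quantize import BaseQuantizeConfig

# Attention layers: 4-bit weights, group size 64, zero-point and scale
# compression; the scale is re-quantized with group size 256.
attn_params = BaseQuantizeConfig(nbits=4, group_size=64,
                                 quant_zero=True, quant_scale=True)
attn_params['scale_quant_params']['group_size'] = 256

# Expert layers: 3-bit weights, group size 64, zero-point and scale
# compression; the scale group size stays at the library default of 128.
experts_params = BaseQuantizeConfig(nbits=3, group_size=64,
                                    quant_zero=True, quant_scale=True)

# Per-layer config keyed by module name (Mixtral-style naming assumed here).
quant_config = {}
for proj in ['q_proj', 'k_proj', 'v_proj', 'o_proj']:
    quant_config[f'self_attn.{proj}'] = attn_params
for w in ['w1', 'w2', 'w3']:
    quant_config[f'block_sparse_moe.experts.{w}'] = experts_params
```

With the hqq HF engine, a config like this would typically be applied by loading the base model via `HQQModelForCausalLM.from_pretrained(...)` and then calling `model.quantize_model(quant_config=quant_config)`; check the installed library version for the exact entry point.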

Model size: 6.91B params (Safetensors; tensor types: FP16, U8, I32)