Add F16 and BF16 quantization (#129, opened 1 day ago by andito)

Add Llama 3.1 license (#121, opened about 2 months ago by jxtngx)

Phi-3.5-MoE-instruct (6 comments; #117, opened 2 months ago by goodasdgood)

Arm optimized quants (1 comment; #113, opened 2 months ago by SaisExperiments)

Please support this method: (7 comments; #96, opened 4 months ago by ZeroWw)

Support Q2 imatrix quants (#95, opened 4 months ago by Dampfinchen)

Maybe impose a max model size? (3 comments; #33, opened 7 months ago by pcuenq)