exl2 quant of Sao10K/72B-Qwen2.5-Kunou-v1

I noticed nobody had uploaded exl2 quants yet, so here's my 6.5bpw quant of 72B-Qwen2.5-Kunou-v1.

  • measurement.json

I'll probably delete this once the big quanters get around to it.

Base model: Qwen/Qwen2.5-72B