This is a 4-bit GPTQ quantized version of FuseChat-Qwen-2.5-7B-Instruct, intended for on-device inference with the Private LLM app.
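Since the weights here are packaged in an mlc-llm-style format, a minimal sketch of loading such a model with the mlc_llm Python package is shown below for reference. The model identifier is simply this repo's name, and whether this particular package loads directly with MLCEngine (rather than only through the Private LLM app) is an assumption, not something the card confirms.

```python
from mlc_llm import MLCEngine

# Assumption: the repo's weights are in an MLC-LLM-compatible layout and can
# be fetched with the HF:// scheme; this is not confirmed by the model card.
model = "HF://numen-tech/FuseChat-Qwen-2.5-7B-Instruct-GPTQ-Int4"

engine = MLCEngine(model)

# Stream a chat completion via the OpenAI-style API exposed by MLCEngine.
for chunk in engine.chat.completions.create(
    messages=[{"role": "user", "content": "Explain 4-bit GPTQ quantization in one sentence."}],
    model=model,
    stream=True,
):
    for choice in chunk.choices:
        print(choice.delta.content, end="", flush=True)
print()

engine.terminate()
```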
Base model: FuseAI/FuseChat-Qwen-2.5-7B-Instruct