huihui-ai/QwQ-32B-Coder-Fusion-7030
Overview
QwQ-32B-Coder-Fusion-7030 is a mixed model that combines the strengths of two powerful Qwen-based models: huihui-ai/QwQ-32B-Preview-abliterated and huihui-ai/Qwen2.5-Coder-32B-Instruct-abliterated.
The weights are blended in a 7:3 ratio: 70% from QwQ-32B-Preview-abliterated and 30% from Qwen2.5-Coder-32B-Instruct-abliterated.
Although it is a simple mix, the model is usable and does not produce gibberish.
This is an experiment: I tested the 9:1, 8:2, and 7:3 ratios separately to see how much impact each has on the model.
Please refer to the mixing source code.
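For illustration, here is a minimal sketch of what a 7:3 linear weight blend looks like with transformers and torch. It is not the actual mixing script referenced above, and it assumes both models share the Qwen 2.5 architecture with identical parameter names; loading two 32B models this way also requires substantial memory.

```python
# Hypothetical sketch of a 7:3 linear weight blend; not the official mixing script.
import torch
from transformers import AutoModelForCausalLM

RATIO = 0.7  # 70% QwQ-32B-Preview-abliterated, 30% Qwen2.5-Coder-32B-Instruct-abliterated

base = AutoModelForCausalLM.from_pretrained(
    "huihui-ai/QwQ-32B-Preview-abliterated", torch_dtype=torch.bfloat16
)
coder = AutoModelForCausalLM.from_pretrained(
    "huihui-ai/Qwen2.5-Coder-32B-Instruct-abliterated", torch_dtype=torch.bfloat16
)

# Linearly interpolate every matching parameter tensor in place:
# base <- RATIO * base + (1 - RATIO) * coder
with torch.no_grad():
    coder_params = dict(coder.named_parameters())
    for name, param in base.named_parameters():
        param.mul_(RATIO).add_(coder_params[name], alpha=1.0 - RATIO)

base.save_pretrained("QwQ-32B-Coder-Fusion-7030")
```

Changing RATIO to 0.9 or 0.8 yields the 9:1 and 8:2 variants mentioned above.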
Model Details
- Base Models: huihui-ai/QwQ-32B-Preview-abliterated and huihui-ai/Qwen2.5-Coder-32B-Instruct-abliterated
- Model Size: 32B parameters
- Architecture: Qwen 2.5
- Mixing Ratio: 7:3 (QwQ-32B-Preview-abliterated:Qwen2.5-Coder-32B-Instruct-abliterated)
ollama
You can use huihui_ai/qwq-fusion:32b-7030 directly:

```
ollama run huihui_ai/qwq-fusion:32b-7030
```

Other proportions can be obtained by visiting huihui_ai/qwq-fusion.
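For programmatic access, a small usage sketch with the ollama Python client (pip install ollama) might look like the following; the model tag is the one above, but the exact response handling is an assumption that may vary with client version.

```python
# Minimal chat sketch with the ollama Python client; assumes the
# huihui_ai/qwq-fusion:32b-7030 tag has already been pulled locally.
import ollama

response = ollama.chat(
    model="huihui_ai/qwq-fusion:32b-7030",
    messages=[
        {"role": "user", "content": "Write a Python function that reverses a linked list."}
    ],
)
print(response["message"]["content"])
```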