CleverQwen2-1.5B-GGUF

This repo contains GGUF quants of CleverQwen2-1.5B.

This is a merge of pre-trained language models created using mergekit.

It has grown by about 300M parameters and I don't know why. I would like to know though. It works as expected - amazing - I just can't see any obvious reason for Qwen2 models to gain parameters when merged.
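One hedged guess, not something I have verified: Qwen2-1.5B ships with its input embeddings and lm_head tied, and a merge can end up saving the lm_head as a separate, untied tensor. For Qwen2's vocabulary that single matrix accounts for roughly 233M extra parameters, which is in the right ballpark for the observed growth. A back-of-the-envelope check, assuming the hidden_size and vocab_size from the published Qwen2-1.5B config.json:

```python
# Assumed values from the published Qwen2-1.5B config.json.
hidden_size = 1536
vocab_size = 151936

# An untied lm_head is one extra [vocab_size, hidden_size] matrix.
extra_params = vocab_size * hidden_size
print(f"untied lm_head adds {extra_params / 1e6:.0f}M parameters")
```

If that is the cause, re-tying the embeddings before quantizing might shrink the model back down, but again: this is a guess, not a diagnosis.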

Merge Details

Merge Method

This model was merged using the Model Stock merge method, with trollek/Qwen2-1.5B-Instruct-Abliterated as the base.

Models Merged

The following models were included in the merge:

- Replete-AI/Replete-Coder-Qwen2-1.5b
- M4-ai/Hercules-5.0-Qwen2-1.5B
- cognitivecomputations/dolphin-2.9.3-qwen2-1.5b

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: Replete-AI/Replete-Coder-Qwen2-1.5b
  - model: M4-ai/Hercules-5.0-Qwen2-1.5B
  - model: cognitivecomputations/dolphin-2.9.3-qwen2-1.5b
merge_method: model_stock
base_model: trollek/Qwen2-1.5B-Instruct-Abliterated
architecture: qwen2
dtype: bfloat16
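Before handing a config like this to mergekit's `mergekit-yaml` CLI, it can be sanity-checked programmatically. A minimal sketch, assuming PyYAML is installed and the config is held as a string:

```python
import yaml  # PyYAML; install with `pip install pyyaml`

# The merge configuration from above, verbatim.
CONFIG = """
models:
  - model: Replete-AI/Replete-Coder-Qwen2-1.5b
  - model: M4-ai/Hercules-5.0-Qwen2-1.5B
  - model: cognitivecomputations/dolphin-2.9.3-qwen2-1.5b
merge_method: model_stock
base_model: trollek/Qwen2-1.5B-Instruct-Abliterated
architecture: qwen2
dtype: bfloat16
"""

cfg = yaml.safe_load(CONFIG)
assert cfg["merge_method"] == "model_stock"
assert len(cfg["models"]) == 3  # three donor models merged onto the base
print("config OK, base:", cfg["base_model"])
```

The actual merge is then a matter of saving the YAML to a file and running something like `mergekit-yaml config.yml ./output-dir` (exact flags depend on your mergekit version and hardware).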

Quants

Ollama

ollama pull trollek/cleverqwen2:1.5b-q4_k_s
ollama pull trollek/cleverqwen2:1.5b-q5_k_s
ollama pull trollek/cleverqwen2:1.5b-q6_k
Model size: 1.78B params
Architecture: qwen2