# merge

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the SLERP merge method.
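SLERP (spherical linear interpolation) blends each pair of corresponding weight tensors along the arc between them rather than along a straight line, which preserves the geometry of the weights better than plain averaging. A minimal per-tensor sketch in NumPy (illustrative only, not mergekit's internal API):

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherically interpolate between two weight tensors at fraction t."""
    a = v0.ravel() / (np.linalg.norm(v0) + eps)  # unit directions, used only
    b = v1.ravel() / (np.linalg.norm(v1) + eps)  # to measure the angle between tensors
    theta = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    if theta < eps:  # near-parallel tensors: fall back to plain linear interpolation
        return (1.0 - t) * v0 + t * v1
    s = np.sin(theta)
    return (np.sin((1.0 - t) * theta) / s) * v0 + (np.sin(t * theta) / s) * v1
```

With `t = 0.5` everywhere, as in the configuration below, both parent models contribute equally to every tensor.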

### Models Merged

The following models were included in the merge:

* CultriX/Qwen2.5-14B-CoreGeneralist
* CultriX/Qwen2.5-14B-ReasoningMerge

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: CultriX/Qwen2.5-14B-CoreGeneralist
merge_method: slerp
dtype: bfloat16
parameters:
  # Uniform interpolation: change the values below if you wish to favor one model for certain layer types.
  t:
    - filter: self_attn
      value: 0.5
    - filter: mlp
      value: 0.5
    - value: 0.5
models:
  - model: CultriX/Qwen2.5-14B-CoreGeneralist
  - model: CultriX/Qwen2.5-14B-ReasoningMerge
tokenizer_source: CultriX/Qwen2.5-14B-CoreGeneralist
chat_template: chatml
name: Qwen2.5-14B-CoreReasoning-Slerp
```
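To reproduce the merge, a configuration like this can be passed to mergekit's `mergekit-yaml` command line tool, or driven from Python. A sketch based on the Python entry point described in mergekit's README (file paths are illustrative; check your installed mergekit version for the exact options):

```python
import torch
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration shown above (assumed saved as config.yml).
with open("config.yml", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Qwen2.5-14B-CoreReasoning-Slerp",  # output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU when one is available
        copy_tokenizer=True,             # copy the tokenizer_source files into the output
    ),
)
```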
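Because the merge specifies `chat_template: chatml`, the model can be prompted through transformers' chat template API. A minimal usage sketch (the repository id is the one this card is published under):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CultriX/Qwen2.5-14B-GeneralReasoning"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Briefly explain spherical linear interpolation."}]
# apply_chat_template wraps the conversation in ChatML tags before tokenizing.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```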