---
base_model:
- ArliAI/Qwen2.5-32B-ArliAI-RPMax-v1.3
- AiCloser/Qwen2.5-32B-AGI
- nbeerbower/Dumpling-Qwen2.5-32B-v2
- rinna/qwen2.5-bakeneko-32b-instruct
- Sao10K/32B-Qwen2.5-Kunou-v1
- allura-org/Qwen2.5-32b-RP-Ink
- EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
- TheSkullery/Q2.5-Hydroblated-R1-32B-v2.5
library_name: transformers
tags:
- mergekit
- merge
---

# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details

### Merge Method
This model was merged using the SCE merge method, with TheSkullery/Q2.5-Hydroblated-R1-32B-v2.5 as the base model.
### Models Merged
The following models were included in the merge:
- ArliAI/Qwen2.5-32B-ArliAI-RPMax-v1.3
- AiCloser/Qwen2.5-32B-AGI
- nbeerbower/Dumpling-Qwen2.5-32B-v2
- rinna/qwen2.5-bakeneko-32b-instruct
- Sao10K/32B-Qwen2.5-Kunou-v1
- allura-org/Qwen2.5-32b-RP-Ink
- EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: TheSkullery/Q2.5-Hydroblated-R1-32B-v2.5
merge_method: sce
dtype: float32
out_dtype: bfloat16
tokenizer_source: Qwen/Qwen2.5-32B
parameters:
  select_topk: 0.16
models:
  - model: EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
  - model: ArliAI/Qwen2.5-32B-ArliAI-RPMax-v1.3
  - model: allura-org/Qwen2.5-32b-RP-Ink
  - model: Sao10K/32B-Qwen2.5-Kunou-v1
  - model: nbeerbower/Dumpling-Qwen2.5-32B-v2
  - model: rinna/qwen2.5-bakeneko-32b-instruct
  - model: AiCloser/Qwen2.5-32B-AGI
```
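As a rough sketch, the merge could be reproduced by saving the configuration above to a file (here assumed to be named `config.yaml`) and running the mergekit CLI. Note that this requires mergekit installed and enough disk space and memory to hold all eight 32B-parameter source checkpoints:

```shell
# Install mergekit (assumed environment; versions may vary)
pip install mergekit

# Run the merge described by config.yaml, writing the result to ./merged-model.
# --cuda offloads tensor computation to GPU where available.
mergekit-yaml config.yaml ./merged-model --cuda
```

The output directory can then be loaded like any Hugging Face Transformers checkpoint; the `out_dtype: bfloat16` setting means the saved weights are bfloat16 even though the merge itself is computed in float32.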