Chuluun-Qwen2.5-32B-v0.01

This is a merge of pre-trained language models created using mergekit.

This merge uses largely the same datasets that went into the 72B v0.01, but since Tess and Magnum aren't available for TQ2.5 32B, I substituted Rombos as the base model and ArliAI's RPMax for Magnum. Testers have reported an experience similar to the 72B, which is high praise indeed for a model half the size. Q4_K_S (or an equivalent BPW) is extremely usable with good context on a single 24GB card.

I don't do v1 releases because of how quickly LLMs and the scene move, and as a rule one model may or may not be better than another depending on what and how you write. The 32B is a stronger RP model than it is a storywriter, but that's to be expected from a mid-size model.

There's some debate as to how much Rombos adds to the mix compared to base Qwen, or even the abliterated versions. Since the goal of Chuluun is to blend uncensored intelligence with strong storywriting/eRP capabilities, I am open to suggestions for good base models that might do this (a Tess or Athene, or even a Dolphin, built off of TQ2.5 would be sweet).

Konnect's Qwenception presets are a good starting point for this model. If the model randomly breaks into Chinese, consider adding a TopK of 200 to your samplers. Use ChatML prompt formatting.

Merge Details

Merge Method

This model was merged using the Model Stock merge method, with rombodawg/Rombos-LLM-V2.5-Qwen-32b as the base.

Models Merged

The following models were included in the merge:

  • EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
  • ArliAI/Qwen2.5-32B-ArliAI-RPMax-v1.3
  • Sao10K/32B-Qwen2.5-Kunou-v1

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
  - model: ArliAI/Qwen2.5-32B-ArliAI-RPMax-v1.3
  - model: Sao10K/32B-Qwen2.5-Kunou-v1
merge_method: model_stock
base_model: rombodawg/Rombos-LLM-V2.5-Qwen-32b
parameters:
  filter_wise: false
dtype: bfloat16
name: DatToad/Chuluun-Qwen2.5-32B-v0.01
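Before kicking off a merge, it can help to sanity-check the config for the fields mergekit expects. A minimal sketch in Python; the `missing_keys` helper below is hypothetical and only checks that the top-level keys are present (mergekit performs full validation itself):

```python
# Hypothetical pre-flight check for a mergekit config: confirms the
# required top-level keys exist before handing the file to mergekit.
REQUIRED_TOP_LEVEL = ("models:", "merge_method:", "base_model:", "dtype:")

CONFIG = """\
models:
  - model: EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
  - model: ArliAI/Qwen2.5-32B-ArliAI-RPMax-v1.3
  - model: Sao10K/32B-Qwen2.5-Kunou-v1
merge_method: model_stock
base_model: rombodawg/Rombos-LLM-V2.5-Qwen-32b
parameters:
  filter_wise: false
dtype: bfloat16
"""

def missing_keys(text: str) -> list[str]:
    """Return the required top-level keys absent from the config text."""
    lines = text.splitlines()
    return [key for key in REQUIRED_TOP_LEVEL
            if not any(line.startswith(key) for line in lines)]

print(missing_keys(CONFIG))  # → []
```

In practice the config is saved to a file and passed to mergekit's command-line tool, e.g. something like `mergekit-yaml config.yml ./output-model-directory`.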

Thank Yous!

Credit as always to the original model makers, as well as to Allura-org (now my org, omgthankyou!) for all their support, and also to the testers in the ArliAI Discord for their suggestions and feedback.

