
MS-Magpantheonsel-lark-v4x1.6.2RP-Cydonia-vXXX-22B-7.1

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the DARE TIES merge method, with unsloth/Mistral-Small-Instruct-2409 as the base.
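For context, DARE TIES works per parameter tensor: each model's task vector (its delta from the base) is randomly sparsified and rescaled (the DARE step, controlled by density), the surviving components go through TIES-style sign election weighted by weight, and the agreeing deltas are summed, scaled by lambda, and added back to the base. The sketch below illustrates this for a single tensor in PyTorch; it is a simplified illustration of the technique, not mergekit's exact implementation.

import torch

def dare_ties_merge(base, deltas, weights, densities, lam=1.0):
    # base:      base-model parameter tensor
    # deltas:    per-model task vectors (fine-tuned tensor minus base)
    # weights:   per-model merge weights
    # densities: per-model keep probabilities for DARE sparsification
    # lam:       final scaling factor (the lambda parameter)
    sparsified = []
    for delta, density in zip(deltas, densities):
        # DARE: randomly drop a (1 - density) fraction of each task vector,
        # then rescale the survivors so the expected delta is unchanged
        mask = torch.bernoulli(torch.full_like(delta, density))
        sparsified.append(delta * mask / density)

    # TIES sign election: take the majority sign of the weighted task vectors
    stacked = torch.stack([w * d for w, d in zip(weights, sparsified)])
    elected_sign = stacked.sum(dim=0).sign()

    # Keep only components whose sign agrees with the elected sign, sum them,
    # scale by lambda, and apply the merged delta to the base weights
    agree = (stacked.sign() == elected_sign).float()
    merged_delta = (stacked * agree).sum(dim=0)
    return base + lam * merged_delta

In the configuration below, each model's density is its DARE keep probability, its weight sets both its vote in the sign election and its contribution to the sum, and the top-level lambda scales the final merged delta.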

Models Merged

The following models were included in the merge:

- anthracite-org/magnum-v4-22b
- TheDrummer/Cydonia-22B-v1.3
- TheDrummer/Cydonia-22B-v1.2
- TheDrummer/Cydonia-22B-v1.1
- Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small
- allura-org/MS-Meadowlark-22B
- spow12/ChatWaifu_v2.0_22B
- Saxo/Linkbricks-Horizon-AI-Japanese-Superb-V1-22B
- crestf411/MS-sunfall-v0.7.0
- unsloth/Mistral-Small-Instruct-2409 + rAIfle/Acolyte-LORA
- InferenceIllusionist/SorcererLM-22B
- unsloth/Mistral-Small-Instruct-2409 + Kaoeiri/Moingooistrial-22B-V1-Lora
- ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
- byroneverson/Mistral-Small-Instruct-2409-abliterated

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: anthracite-org/magnum-v4-22b
    parameters:
      weight: 1.0         # Primary model for human-like writing
      density: 0.88       # Strong foundation with room for blending
  - model: TheDrummer/Cydonia-22B-v1.3
    parameters:
      weight: 0.27        # Balanced for creative flair
      density: 0.71       # Subtle creativity with strong coherence
  - model: TheDrummer/Cydonia-22B-v1.2
    parameters:
      weight: 0.17        # Light creativity for nuanced diversity
      density: 0.68       # Maintains alignment with overarching structure
  - model: TheDrummer/Cydonia-22B-v1.1
    parameters:
      weight: 0.2         # Adds depth to accurate and specific nuances
      density: 0.69       # Smoothly integrates details without overwhelming
  - model: Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small
    parameters:
      weight: 0.3         # Refined for deeper storytelling and RP focus
      density: 0.78       # Supports narrative without clashing
  - model: allura-org/MS-Meadowlark-22B
    parameters:
      weight: 0.29        # Balances creativity with structured fluency
      density: 0.72       # Enhances clarity and descriptive depth
  - model: spow12/ChatWaifu_v2.0_22B
    parameters:
      weight: 0.27        # Maintains anime-style RP and conversational tone
      density: 0.7        # Kept moderate for balanced integration
  - model: Saxo/Linkbricks-Horizon-AI-Japanese-Superb-V1-22B
    parameters:
      weight: 0.19        # Specialized for Japanese contexts
      density: 0.58       # Ensures contextual accuracy without overlap
  - model: crestf411/MS-sunfall-v0.7.0
    parameters:
      weight: 0.26        # Enhanced for impactful dramatic storytelling
      density: 0.74       # Balances spicy narratives with other models
  - model: unsloth/Mistral-Small-Instruct-2409+rAIfle/Acolyte-LORA
    parameters:
      weight: 0.25        # Balanced for varied structured content
      density: 0.71       # Ensures seamless alignment with base
  - model: InferenceIllusionist/SorcererLM-22B
    parameters:
      weight: 0.22        # Stylized refinement for cohesive outputs
      density: 0.73       # Keeps stylistic diversity in balance
  - model: unsloth/Mistral-Small-Instruct-2409+Kaoeiri/Moingooistrial-22B-V1-Lora
    parameters:
      weight: 0.24        # Mythical storytelling integration
      density: 0.71       # Balanced for smooth interaction
  - model: ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
    parameters:
      weight: 0.1         # Light touch to avoid excessive RP influence
      density: 0.64       # Fine-tuned for roleplay-specific elements
  - model: byroneverson/Mistral-Small-Instruct-2409-abliterated
    parameters:
      weight: 0.16        # Adds raw and unfiltered context nuance
      density: 0.69       # Supports diverse content without overpowering

merge_method: dare_ties  # Suited to blending many diverse models
base_model: unsloth/Mistral-Small-Instruct-2409
parameters:
  density: 0.85          # Overall density ensures logical and creative balance
  epsilon: 0.08          # Reduced for smoother model interpolation
  lambda: 1.23           # Balanced scaling for crisp and coherent outputs
dtype: bfloat16
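Assuming the YAML above is saved locally (e.g. as config.yaml, a filename chosen here for illustration), the merge can be reproduced with mergekit's command-line entry point: mergekit-yaml config.yaml ./output-model-directory. The result is a standard Mistral-Small-architecture checkpoint, so it loads like any other transformers causal LM. A minimal loading sketch, assuming the repo ID this card is published under and that you have accepted the repository's access conditions:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Kaoeiri/MS-Magpantheonsel-lark-v4x1.6.2RP-Cydonia-vXXX-22B-7.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge's dtype: bfloat16
    device_map="auto",
)

messages = [{"role": "user", "content": "Write a short scene set in a rainy harbor town."}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))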