
**Temperature:** Mistral Nemo works best at a low temperature, around 0.3-0.5.
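
A minimal sketch of generation settings in that range (everything except the temperature value is an illustrative assumption, not taken from this card):

```python
# Sampler settings sketch: keep temperature low (0.3-0.5) for this merge.
generation_kwargs = {
    "do_sample": True,
    "temperature": 0.4,      # recommended range: 0.3-0.5
    "max_new_tokens": 512,   # illustrative value
}
```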

# Mistral-Nemo-2407-12B-Estrella-v1


A roleplay (RP) model. It seems coherent and concise while still being creative. This is a big merge built with the new DELLA technique.

**Prompt Format:** "Mistral Instruct" seems to work best, but ChatML may also work.

```
[INST] System Message [/INST]

[INST] Name: Let's get started. Please respond based on the information and instructions provided above. [/INST]

<s>[INST] Name: What is your favourite condiment? [/INST]
AssistantName: Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s>
[INST] Name: Do you have mayonnaise recipes? [/INST]
```
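
A minimal sketch of building this format with transformers (it assumes the tokenizer shipped with this repo carries a Mistral-style chat template; names and settings are illustrative):

```python
# Build a Mistral Instruct prompt from chat messages and sample with a low temperature.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "v000000/MN-12B-Estrella-v1"
tokenizer = AutoTokenizer.from_pretrained(repo)

messages = [
    {"role": "user", "content": "Name: What is your favourite condiment?"},
    {"role": "assistant", "content": "AssistantName: Well, I'm quite partial to a good squeeze of fresh lemon juice."},
    {"role": "user", "content": "Name: Do you have mayonnaise recipes?"},
]

# Should render roughly as: <s>[INST] ... [/INST] ... </s>[INST] ... [/INST]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="bfloat16", device_map="auto")
# add_special_tokens=False because the rendered prompt already starts with <s>.
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)
output = model.generate(**inputs, do_sample=True, temperature=0.4, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```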

Thanks to mradermacher for the quants.



This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged with a multi-step method using the DELLA, DELLA_LINEAR and SLERP merge algorithms.
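
For intuition, DELLA-style methods sparsify each model's delta from the base (dropping a fraction of the delta parameters and rescaling the survivors) before combining the deltas with the per-model weights. Below is a toy sketch of that idea only; the real mergekit implementation sets drop probabilities from delta magnitudes (MAGPRUNE), and the `della` method additionally applies TIES-style sign election:

```python
# Toy sketch of the drop-and-rescale idea behind DELLA-style delta merging.
# Simplified and illustrative; not the mergekit implementation.
import numpy as np

rng = np.random.default_rng(0)

def drop_and_rescale(delta, density):
    """Keep roughly `density` of the delta entries and rescale the survivors."""
    mask = rng.random(delta.shape) < density
    return (delta * mask) / density

def della_like_merge(base, finetuned_models, weights, densities):
    merged_delta = np.zeros_like(base)
    for ft, w, d in zip(finetuned_models, weights, densities):
        merged_delta += w * drop_and_rescale(ft - base, d)
    return base + merged_delta

base = rng.normal(size=1000)
experts = [base + rng.normal(scale=0.05, size=1000) for _ in range(2)]
merged = della_like_merge(base, experts, weights=[0.35, 0.55], densities=[0.85, 0.90])
print(merged[:5])
```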

### Models Merged

The following models were included in the merge:

* Sao10K/MN-12B-Lyra-v1
* shuttleai/shuttle-2.5-mini
* anthracite-org/magnum-12b-v2
* nothingiisreal/MN-12B-Celeste-V1.9
* BeaverAI/mistral-doryV2-12b
* unsloth/Mistral-Nemo-Instruct-2407
* UsernameJustAnother/Nemo-12B-Marlin-v5
* invisietch/Atlantis-v0.1-12B
* NeverSleep/Lumimaid-v0.2-12B

### Configuration

The following YAML configuration was used to produce this model:

```yaml
#Step 1 (Part1)
models:
  - model: Sao10K/MN-12B-Lyra-v1
    parameters:
      weight: 0.15
      density: 0.77
  - model: shuttleai/shuttle-2.5-mini
    parameters:
      weight: 0.20
      density: 0.78
  - model: anthracite-org/magnum-12b-v2
    parameters:
      weight: 0.35
      density: 0.85
  - model: nothingiisreal/MN-12B-Celeste-V1.9
    parameters:
      weight: 0.55
      density: 0.90
merge_method: della
base_model: Sao10K/MN-12B-Lyra-v1
parameters:
  int8_mask: true
  epsilon: 0.05
  lambda: 1
dtype: bfloat16
#Step 2 (Part2)
models:
  - model: BeaverAI/mistral-doryV2-12b
    parameters:
      weight: 0.10
      density: 0.4
  - model: unsloth/Mistral-Nemo-Instruct-2407
    parameters:
      weight: 0.20
      density: 0.4
  - model: UsernameJustAnother/Nemo-12B-Marlin-v5
    parameters:
      weight: 0.25
      density: 0.5
  - model: invisietch/Atlantis-v0.1-12B
    parameters:
      weight: 0.3
      density: 0.5
  - model: NeverSleep/Lumimaid-v0.2-12B
    parameters:
      weight: 0.4
      density: 0.8
merge_method: della_linear
base_model: anthracite-org/magnum-12b-v2
parameters:
  int8_mask: true
  epsilon: 0.05
  lambda: 1
dtype: bfloat16
#Step 3 (Estrella)
slices:
  - sources:
      - model: v000000/MN-12B-Part2
        layer_range: [0, 40]
      - model: v000000/MN-12B-Part1
        layer_range: [0, 40]
merge_method: slerp
base_model: v000000/MN-12B-Part1
parameters: # smooth t gradient, prioritizing Part1
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 0.6, 0.1, 0.6, 0.3, 0.8, 0.5]
    - filter: mlp
      value: [0, 0.5, 0.4, 0.3, 0, 0.3, 0.4, 0.7, 0.2, 0.5]
    - value: 0.5
dtype: bfloat16
```
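
The three configs above are separate mergekit passes: steps 1 and 2 produce the intermediate MN-12B-Part1 and MN-12B-Part2 models, and step 3 SLERPs them back together with the layer-wise t schedule. A rough sketch of running the passes locally with the mergekit CLI (the file names, output paths, and the --cuda flag are assumptions, not from this card):

```python
# Run the three merge passes in order; each YAML above would be saved as its own file,
# and step3.yml would point its model entries at the local ./part1 and ./part2 outputs.
import subprocess

steps = [
    ("step1.yml", "./part1"),     # DELLA merge         -> MN-12B-Part1
    ("step2.yml", "./part2"),     # DELLA_LINEAR merge  -> MN-12B-Part2
    ("step3.yml", "./estrella"),  # SLERP of Part1/Part2 -> Estrella
]

for config, out_dir in steps:
    subprocess.run(["mergekit-yaml", config, out_dir, "--cuda"], check=True)
```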