---
base_model: []
library_name: transformers
tags:
  - mergekit
  - merge
---

# MN-magnum-v2.5-18.5B-kto-Instruct

For full details on this model, and its GGUF quantizations, please visit:

[ https://huggingface.co/DavidAU/MN-Magnum-v2.5-18.5B-kto-Story-Wizard-ED1-Instruct-GGUF ]

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the passthrough merge method.
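
The passthrough method does not average weights: it simply stacks the requested layer ranges from the source models, in order, into a single deeper network. As a rough illustration (not part of the original card), the sketch below sums the slice ranges from the configuration further down; assuming mergekit's half-open `[start, end)` convention for `layer_range`, the two 40-layer, 12B-parameter source models combine into a 63-layer stack, roughly in line with the 18.5B size in the model name.

```python
# Minimal sketch (assumption: layer_range is a half-open [start, end) slice,
# as mergekit uses). It reproduces the layer bookkeeping of the passthrough
# merge defined in the Configuration section below.
slices = [
    ("Mistral-Nemo-Instruct-2407-12B", (0, 14)),
    ("magnum-v2.5-12b-kto", (8, 24)),
    ("Mistral-Nemo-Instruct-2407-12B", (14, 22)),
    ("Mistral-Nemo-Instruct-2407-12B", (22, 31)),
    ("magnum-v2.5-12b-kto", (24, 40)),
]

total = 0
for model, (start, end) in slices:
    count = end - start
    total += count
    print(f"{model}: layers {start}..{end - 1} ({count} layers)")

print(f"merged depth: {total} layers")  # 14 + 16 + 8 + 9 + 16 = 63
```

Note that the overlapping ranges intentionally duplicate some layers; the `scale` entries on `o_proj` and `down_proj` in the configuration down-weight the duplicated Instruct slices (0.5 and 0.75), presumably to soften the effect of repeating those blocks.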

### Models Merged

The following models were included in the merge:

* G:/11B/magnum-v2.5-12b-kto
* g:/11b/Mistral-Nemo-Instruct-2407-12B

### Configuration

The following YAML configuration was used to produce this model:

```yaml
# SMB with instruct to help performance.

slices:
 - sources:
   - model: g:/11b/Mistral-Nemo-Instruct-2407-12B
     layer_range: [0, 14]
 - sources:
   - model: G:/11B/magnum-v2.5-12b-kto
     layer_range: [8, 24]
     parameters:
       scale:
         - filter: o_proj
           value: 1
         - filter: down_proj
           value: 1
         - value: 1
 - sources:
   - model: g:/11b/Mistral-Nemo-Instruct-2407-12B
     layer_range: [14, 22]
     parameters:
       scale:
         - filter: o_proj
           value: .5
         - filter: down_proj
           value: .5
         - value: 1
 - sources:
   - model: g:/11b/Mistral-Nemo-Instruct-2407-12B
     layer_range: [22, 31]
     parameters:
       scale:
         - filter: o_proj
           value: .75
         - filter: down_proj
           value: .75
         - value: 1
 - sources:
   - model: G:/11B/magnum-v2.5-12b-kto
     layer_range: [24, 40]
     parameters:
       scale:
         - filter: o_proj
           value: 1
         - filter: down_proj
           value: 1
         - value: 1
merge_method: passthrough
dtype: bfloat16
```
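
For anyone reproducing the merge, the configuration above can be fed to mergekit from the command line (`mergekit-yaml config.yaml ./output-dir`) or through its Python API. The snippet below is a minimal sketch only: it assumes the YAML is saved as `config.yaml`, that a recent mergekit release is installed, and that the local `G:/11B/...` paths are replaced with reachable model directories or Hugging Face repo IDs; exact API names may vary between mergekit versions.

```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the passthrough configuration shown above (assumed saved as config.yaml).
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the merge; adjust options (CUDA, lazy unpickling, etc.) to the local setup.
run_merge(
    merge_config,
    out_path="./MN-magnum-v2.5-18.5B-kto-Instruct",
    options=MergeOptions(
        copy_tokenizer=True,
        lazy_unpickle=True,
        low_cpu_memory=True,
    ),
)
```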