---
base_model:
  - chihoonlee10/T3Q-Mistral-Orca-Math-DPO
  - senseable/WestLake-7B-v2
  - S-miguel/The-Trinity-Coder-7B
  - yam-peleg/Experiment26-7B
  - InferenceIllusionist/Excalibur-7b-DPO
  - Kukedlc/Jupiter-k-7B-slerp
library_name: transformers
tags:
  - mergekit
  - merge
---

# Jett-w26


This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

## Merge Details

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B) as the base.
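In the configuration below, each contributing model's `density` is the fraction of its delta (task-vector) parameters retained under DARE's random drop-and-rescale step, and `weight` scales that model's contribution before TIES sign-consensus merging. A toy single-tensor sketch of the idea (illustrative only, not mergekit's actual implementation):

```python
import torch

def dare_ties(base: torch.Tensor, tuned: list[torch.Tensor],
              densities: list[float], weights: list[float]) -> torch.Tensor:
    """Toy DARE-TIES merge of a single weight tensor (not mergekit's code)."""
    deltas = []
    for ft, d, w in zip(tuned, densities, weights):
        delta = ft - base                                  # task vector vs. the base model
        keep = torch.rand_like(delta) < d                  # DARE: keep each element with prob `density`
        delta = torch.where(keep, delta / d, torch.zeros_like(delta))  # rescale survivors
        deltas.append(w * delta)                           # apply the per-model weight
    stacked = torch.stack(deltas)
    sign = stacked.sum(dim=0).sign()                       # TIES: elect a majority sign per element
    agree = stacked.sign() == sign                         # drop elements that fight the consensus
    return base + torch.where(agree, stacked, torch.zeros_like(stacked)).sum(dim=0)
```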

### Models Merged

The following models were included in the merge:

- [Kukedlc/Jupiter-k-7B-slerp](https://huggingface.co/Kukedlc/Jupiter-k-7B-slerp)
- [S-miguel/The-Trinity-Coder-7B](https://huggingface.co/S-miguel/The-Trinity-Coder-7B)
- [chihoonlee10/T3Q-Mistral-Orca-Math-DPO](https://huggingface.co/chihoonlee10/T3Q-Mistral-Orca-Math-DPO)
- [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2)
- [InferenceIllusionist/Excalibur-7b-DPO](https://huggingface.co/InferenceIllusionist/Excalibur-7b-DPO)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: yam-peleg/Experiment26-7B
    # No parameters necessary for base model
  - model: Kukedlc/Jupiter-k-7B-slerp
    parameters:
      density: 0.58
      weight: 0.25
  - model: S-miguel/The-Trinity-Coder-7B
    parameters:
      density: 0.6
      weight: 0.20
  - model: chihoonlee10/T3Q-Mistral-Orca-Math-DPO
    parameters:
      density: 0.6
      weight: 0.20
  - model: senseable/WestLake-7B-v2
    parameters:
      density: 0.56
      weight: 0.20
  - model: InferenceIllusionist/Excalibur-7b-DPO
    parameters:
      density: 0.58
      weight: 0.15
merge_method: dare_ties
base_model: yam-peleg/Experiment26-7B
dtype: bfloat16
```
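
To reproduce the merge, this configuration can be saved as e.g. `config.yml` and passed to mergekit's `mergekit-yaml` command. The merged model then loads like any other `transformers` causal LM; a minimal usage sketch, assuming the repo id `giannisan/Jett-w26` from this card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "giannisan/Jett-w26"  # assumed repo id, taken from this model card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge's `dtype: bfloat16`
    device_map="auto",
)

prompt = "Write a haiku about model merging."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```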