---
base_model:
  - icefog72/Kunokukulemonchini-32k-7b
  - icefog72/Mixtral_AI_Cyber_3.m1-BigL
  - LeroyDyer/Mixtral_AI_Cyber_3.m1
  - Undi95/BigL-7B
library_name: transformers
tags:
  - mergekit
  - merge
  - alpaca
  - mistral
  - not-for-all-audiences
  - nsfw
license: cc-by-nc-4.0
---

# IceLemonTeaRP-32k-7b

This is a merge of pre-trained language models created using mergekit.

## Merge Details

A merge cooked from fresh ingredients to fix the repetition problems of icefog72/IceTeaRP-7b.

Prompt template: Alpaca; ChatML may also work. An example of the Alpaca format is shown below.
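
For reference, the usual Alpaca layout (instruction-only variant) is:

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```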

A measurement.json for exl2 quantization is included.
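
If you quantize with exllamav2, the bundled measurement file can be passed to its convert.py to skip the slow measurement pass. A sketch of a typical invocation, where the paths and the 5.0 bpw target are placeholders:

```sh
python convert.py -i ./IceLemonTeaRP-32k-7b -o ./tmp-work \
  -cf ./IceLemonTeaRP-32k-7b-5.0bpw-exl2 -m measurement.json -b 5.0
```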

### Merge Method

This model was merged using the SLERP merge method.
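
Roughly, SLERP interpolates each pair of weight tensors along the arc between them rather than the straight line, which preserves tensor magnitudes better than plain averaging. A minimal PyTorch sketch of the idea (illustrative only, not mergekit's actual code; the `slerp` helper and its `eps` guard are my own):

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Flatten and normalize so the tensors can be treated as unit vectors.
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    # Angle between the two weight vectors.
    omega = torch.acos(torch.clamp(a_unit @ b_unit, -1.0, 1.0))
    if omega.abs() < 1e-4:
        # Nearly parallel vectors: plain linear interpolation is stable here.
        mixed = (1 - t) * a_flat + t * b_flat
    else:
        so = torch.sin(omega)
        mixed = (torch.sin((1 - t) * omega) / so) * a_flat \
              + (torch.sin(t * omega) / so) * b_flat
    return mixed.reshape(a.shape).to(a.dtype)
```

In the config below, `t` controls this blend per layer: `t = 0` keeps the base model's tensor, `t = 1` takes the other model's, and the per-filter ramps vary `t` across the layer stack separately for attention and MLP weights.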

### Models Merged

The following models were included in the merge:

* icefog72/Kunokukulemonchini-32k-7b
* icefog72/Mixtral_AI_Cyber_3.m1-BigL

### Configuration

The following YAML configuration was used to produce this model:


```yaml
slices:
  - sources:
      - model: Mixtral_AI_Cyber_3.m1-BigL
        layer_range: [0, 32]
      - model: Kunokukulemonchini-32k-7b
        layer_range: [0, 32]
merge_method: slerp
base_model: Kunokukulemonchini-32k-7b
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: float16
```
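
To reproduce the merge, save the config above to a file (`config.yml` here is a placeholder name) and run mergekit's CLI. Note that the config refers to the models by local directory name, so either download them to those paths or substitute the full Hugging Face IDs:

```sh
mergekit-yaml config.yml ./IceLemonTeaRP-32k-7b --cuda
```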