
# Yugo45-GPT *(7B)*

This Yugo45-GPT (7B) model has been fine-tuned on the Alpaca dataset, using gordicaleksa/YugoGPT as the base model.

Yugo45-GPT is a merge of the following models using LazyMergekit:

- datatab/YugoGPT-Alpaca-v1
- FlexingD/yarn-mistral-7B-64k-instruct-alpaca-cleaned-origin

## 📌 Note

Special thanks to Stopwolf for the idea, and to this X post by @TheStopwolf.

## 🧩 Configuration

```yaml
slices:
  - sources:
      - model: datatab/YugoGPT-Alpaca-v1
        layer_range: [0, 32]
      - model: FlexingD/yarn-mistral-7B-64k-instruct-alpaca-cleaned-origin
        layer_range: [0, 32]
merge_method: slerp
base_model: datatab/YugoGPT-Alpaca-v1
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
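
The merge can be reproduced from this configuration. The sketch below assumes mergekit's Python API (the same entry points LazyMergekit drives), and assumes the YAML above has been saved to `config.yaml`; the output path is an arbitrary example:

```python
# Sketch: reproduce the SLERP merge from the YAML config above.
# Assumes mergekit is installed (pip install mergekit); the output
# path "./Yugo45-GPT" is a hypothetical example.
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./Yugo45-GPT",            # where the merged weights land
    options=MergeOptions(
        cuda=torch.cuda.is_available(), # merge on GPU when available
        copy_tokenizer=True,            # carry over the base model's tokenizer
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```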

πŸ‹πŸΌ Benchmarks

TBD

## 💻 Usage

TBD
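
Until an official example is published, here is a minimal sketch of loading the model with the Hugging Face transformers library. The Alpaca-style prompt template is an assumption based on the fine-tuning dataset, not a documented format:

```python
# Minimal sketch: load Yugo45-GPT and generate a reply.
# The Alpaca-style prompt template below is an assumption based on
# the fine-tuning dataset; adjust it if the model expects another format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "datatab/Yugo45-GPT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are stored in BF16
    device_map="auto",
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nNapiši kratku pesmu o Beogradu.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
)
# Print only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```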