Another experiment in the lineage of loyal-piano-m7.

Steps taken to produce this model:

  • Train loyal-piano-m7
  • cDPO with HuggingFaceH4/ultrafeedback_binarized to produce loyal-piano-m7-cdpo (see the sketch after this list)
  • Train another model on a different sampling of the same source datasets as loyal-piano; let's call it servile-harpsichord
  • cDPO servile-harpsichord with allenai/ultrafeedback_binarized_cleaned, Intel/orca_dpo_pairs, and a helpfulness-only version of PKU-Alignment/PKU-SafeRLHF
  • TIES merge several checkpoints of servile-harpsichord-cdpo with loyal-piano-m7-cdpo
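
For the cDPO steps, something like the following TRL sketch is the general shape. This is an assumption about tooling: the card doesn't say which trainer was used, and the hyperparameter values shown are placeholders. cDPO is DPO with a smoothed preference label, which recent `trl` releases expose as `label_smoothing` on `DPOConfig`.

```python
# Hedged sketch of the first cDPO step. Tooling and hyperparameters are
# assumptions; the card only names the dataset and the cDPO objective.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "chargoddard/loyal-piano-m7"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# The train_prefs split carries prompt/chosen/rejected preference pairs.
train_dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

config = DPOConfig(
    output_dir="loyal-piano-m7-cdpo",
    beta=0.1,             # DPO temperature (placeholder value)
    label_smoothing=0.2,  # > 0 is what makes this cDPO rather than plain DPO (placeholder)
)
trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # `tokenizer=` in older trl releases
)
trainer.train()
```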

Local benchmarks show the result to be better than any of the individual components. Let's see if that holds up!

Trained using the Alpaca prompt format.
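
As a minimal inference sketch, here is the standard Alpaca no-input template wired up with transformers (the instruction text and generation settings are just examples; `device_map="auto"` assumes `accelerate` is installed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "chargoddard/piano-medley-7b"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype="auto", device_map="auto")

# Standard Alpaca template (no-input variant).
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

prompt = ALPACA_TEMPLATE.format(instruction="Summarize what a TIES merge does.")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Strip the prompt tokens before decoding so only the response is printed.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```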

Configuration for final merge:

```yml
models:
  - model: chargoddard/loyal-piano-m7-cdpo
    parameters:
      density: 1.0
      weight: 1.0
  - model: /home/ubuntu/servile-harpsichord-cdpo/checkpoint-4186
    parameters:
      weight: 0.1
  - model: /home/ubuntu/servile-harpsichord-cdpo/checkpoint-5796
    parameters:
      weight: 0.2
  - model: /home/ubuntu/servile-harpsichord-cdpo/checkpoint-6118
    parameters:
      weight: 0.3
  - model: /home/ubuntu/servile-harpsichord-cdpo/final
    parameters:
      weight: 0.4
merge_method: ties
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
parameters:
  density: 0.4
  normalize: true
  int8_mask: true
```
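
Assuming the YAML above is saved as `config.yml`, the merge should be reproducible with mergekit's CLI, e.g. `mergekit-yaml config.yml ./piano-medley-7b` (the output path is illustrative).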