
# TriFusionNexus-7b

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with CultriX/NeuralTrix-7B-dpo as the base.
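TIES works in three steps: trim each fine-tuned model's task vector (its difference from the base) down to its largest-magnitude entries, elect a sign per parameter, then average only the values that agree with the elected sign. The following is a minimal per-tensor sketch in plain PyTorch for illustration; the function name and signature are hypothetical, and this is not mergekit's actual implementation.

```python
import torch

def ties_merge_tensor(base, tuned, density=0.5, weights=None, normalize=True):
    """Toy per-tensor TIES merge: trim, elect signs, disjoint merge.

    base:  parameter tensor from the base model
    tuned: list of the corresponding tensors from the fine-tuned models
    """
    if weights is None:
        weights = [1.0] * len(tuned)

    # 1. Task vectors: per-model differences from the base weights.
    deltas = [t - base for t in tuned]

    # 2. Trim: keep only the top-`density` fraction of each delta by
    #    magnitude (this is what `density` controls in the YAML below).
    trimmed = []
    for d in deltas:
        k = max(1, int(d.numel() * density))
        thresh = d.abs().flatten().topk(k).values.min()
        trimmed.append(torch.where(d.abs() >= thresh, d, torch.zeros_like(d)))

    # 3. Elect a sign per parameter: the sign of the weighted delta sum.
    stacked = torch.stack([w * t for w, t in zip(weights, trimmed)])
    elected = stacked.sum(dim=0).sign()

    # 4. Disjoint merge: combine only entries that agree with the elected
    #    sign; `normalize` divides by the total weight that contributed,
    #    mirroring `normalize: true` in the config.
    agree = torch.stack([t.sign() == elected for t in trimmed])
    merged = (stacked * agree).sum(dim=0)
    if normalize:
        w = torch.tensor(weights).view(-1, *([1] * base.dim()))
        merged = merged / (agree * w).sum(dim=0).clamp(min=1e-8)

    return base + merged
```

Under the configuration below, AlphaMonarch-7B effectively contributes with weight 0.5 on MLP tensors and 0 elsewhere, while jaskier-7b-dpo-v5.6 contributes with weight 0.5 everywhere.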

### Models Merged

The following models were included in the merge:

* mlabonne/AlphaMonarch-7B
* bardsai/jaskier-7b-dpo-v5.6

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: CultriX/NeuralTrix-7B-dpo  # no parameters necessary for base model
  - model: mlabonne/AlphaMonarch-7B
    parameters:
      density: 0.5  # fraction of weights in differences from the base model to retain
      weight:   # weight gradient
        - filter: mlp
          value: 0.5
        - value: 0
  - model: bardsai/jaskier-7b-dpo-v5.6
    parameters:
      density: 0.5
      weight: 0.5
merge_method: ties
base_model: CultriX/NeuralTrix-7B-dpo
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
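
To reproduce the merge, the configuration above can be fed to mergekit. Below is a sketch using mergekit's Python API, assuming the config is saved as `config.yaml`; the entry points may differ across mergekit versions, so check its README for the current interface.

```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration shown above (the file path is an assumption).
with open("config.yaml", "r", encoding="utf-8") as f:
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Run the merge and write the result to a local output directory.
run_merge(
    config,
    out_path="./TriFusionNexus-7b",
    options=MergeOptions(
        cuda=False,           # set True to merge on GPU
        copy_tokenizer=True,  # copy the base model's tokenizer into the output
    ),
)
```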

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric | Value |
|---|---|
| Avg. | 76.32 |
| AI2 Reasoning Challenge (25-Shot) | 72.78 |
| HellaSwag (10-Shot) | 89.17 |
| MMLU (5-Shot) | 64.44 |
| TruthfulQA (0-shot) | 78.13 |
| Winogrande (5-shot) | 84.93 |
| GSM8k (5-shot) | 68.46 |
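
The merged checkpoint is a standard 7B causal language model stored in float16, so it loads with plain transformers. A minimal usage sketch follows; the Hub repo id is a guess inferred from the model name, so substitute the actual path.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id inferred from the model name -- substitute the real one.
model_id = "CultriX/TriFusionNexus-7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the merge's dtype
    device_map="auto",
)

prompt = "Briefly explain what TIES model merging does."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```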