# Psyfighter2-Orca2-ties

Psyfighter2-Orca2-ties is a merge of the following models using mergekit:

* [KoboldAI/LLaMA2-13B-Psyfighter2](https://huggingface.co/KoboldAI/LLaMA2-13B-Psyfighter2)
* [microsoft/Orca-2-13b](https://huggingface.co/microsoft/Orca-2-13b)

This is the very first merge I have ever attempted. The motivation behind it is to create a 13B version of jebcarter/psyonic-cetacean-20B. I don't have a good GPU (GTX 1660 6GB), so although I can merge the model, I cannot actually run it. However, the Open LLM Leaderboard ranks this merge at 63.48 average points, which is higher than both KoboldAI/LLaMA2-13B-Psyfighter2 and jebcarter/psyonic-cetacean-20B, so I must have done something right. The next step is to quantize the merge into GGUF so I can actually run it with KoboldCpp.
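For reference, here is a minimal sketch of that conversion, assuming the llama.cpp toolchain (script and binary names differ between llama.cpp versions, and `Q4_K_M` is just a common choice for squeezing a 13B model next to a small GPU):

```sh
# Convert the merged HF checkpoint to a 16-bit GGUF file.
# (Newer llama.cpp trees ship this script as convert_hf_to_gguf.py.)
python convert.py ./Psyfighter2-Orca2-13B-ties \
    --outtype f16 --outfile psyfighter2-orca2-13b-ties-f16.gguf

# Quantize down to something a 6 GB card can partially offload.
./quantize psyfighter2-orca2-13b-ties-f16.gguf \
    psyfighter2-orca2-13b-ties-Q4_K_M.gguf Q4_K_M

# Run with KoboldCpp, offloading some layers to the GPU.
python koboldcpp.py --model psyfighter2-orca2-13b-ties-Q4_K_M.gguf --gpulayers 20
```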

## 🧩 Configuration

```yaml
models:
  - model: KoboldAI/LLaMA2-13B-Psyfighter2
  - model: microsoft/Orca-2-13b
    parameters:
      density: 0.40
      weight: [0, 0.3, 0.7, 1]
merge_method: ties
base_model: KoboldAI/LLaMA2-13B-Psyfighter2
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
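Assuming this config is saved as `config.yml`, the merge itself can be reproduced with mergekit's CLI; this is only a sketch, and `--cuda` is optional (it just speeds things up when a GPU is available):

```sh
pip install mergekit
mergekit-yaml config.yml ./Psyfighter2-Orca2-13B-ties --cuda
```

The `weight: [0, 0.3, 0.7, 1]` list is a gradient that mergekit interpolates across the layer stack, so the earliest layers stay close to Psyfighter2 while the deepest layers lean almost entirely toward Orca-2; `density: 0.40` keeps only the top 40% of Orca-2's delta parameters before the TIES sign election.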

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric | Value |
|---|---|
| Avg. | 63.48 |
| AI2 Reasoning Challenge (25-Shot) | 62.46 |
| HellaSwag (10-Shot) | 81.74 |
| MMLU (5-Shot) | 60.31 |
| TruthfulQA (0-shot) | 55.40 |
| Winogrande (5-shot) | 77.27 |
| GSM8k (5-shot) | 43.67 |
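These numbers can be sanity-checked locally with EleutherAI's lm-evaluation-harness; the sketch below runs the ARC task only, and since the Leaderboard pins specific harness versions and settings, local scores will not match exactly:

```sh
pip install lm-eval
lm_eval --model hf \
    --model_args pretrained=tuantran1632001/Psyfighter2-Orca2-13B-ties,dtype=float16 \
    --tasks arc_challenge --num_fewshot 25 --batch_size auto
```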
