
jaLLAbi2-7b

jaLLAbi2-7b is a merge of the following models using mergekit:

* FelixChao/WestSeverus-7B-DPO-v2
* bardsai/jaskier-7b-dpo-v5.6
* AbacusResearch/haLLAwa3
* cognitivecomputations/WestLake-7B-v2-laser

🧩 Configuration

```yaml
models:
  - model: eren23/ogno-monarch-jaskier-merge-7b
    # No parameters necessary for base model
  - model: FelixChao/WestSeverus-7B-DPO-v2 # Emphasize the beginning of Vicuna format models
    parameters:
      weight: 0.2
      density: 0.59
  - model: bardsai/jaskier-7b-dpo-v5.6
    parameters:
      weight: 0.2
      density: 0.55
  # Vicuna format
  - model: AbacusResearch/haLLAwa3
    parameters:
      weight: 0.3
      density: 0.55
  - model: cognitivecomputations/WestLake-7B-v2-laser
    parameters:
      weight: 0.3
      density: 0.55
merge_method: dare_ties
base_model: eren23/ogno-monarch-jaskier-merge-7b
parameters:
  int8_mask: true
dtype: bfloat16
random_seed: 0
```
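The `dare_ties` merge method combines DARE (randomly dropping a fraction of each fine-tune's parameter deltas and rescaling the survivors by the configured `density`) with TIES-style sign election (keeping only delta components that agree with the per-parameter majority sign). A minimal NumPy sketch of the idea, operating on single toy tensors rather than real checkpoints (the function name and simplifications here are illustrative, not mergekit's actual implementation):

```python
import numpy as np

def dare_ties_merge(base, finetuned, weights, densities, seed=0):
    """Toy single-tensor sketch of a DARE + TIES merge (not mergekit's code)."""
    rng = np.random.default_rng(seed)
    deltas = []
    for ft, w, d in zip(finetuned, weights, densities):
        delta = ft - base
        # DARE: drop each delta entry with prob (1 - density), rescale survivors by 1/density
        keep = rng.random(delta.shape) < d
        delta = np.where(keep, delta / d, 0.0)
        deltas.append(w * delta)
    stacked = np.stack(deltas)
    # TIES sign election: per-parameter majority sign of the weighted deltas
    elected = np.sign(stacked.sum(axis=0))
    agree = np.sign(stacked) == elected
    # Sum only the components that agree with the elected sign
    merged_delta = np.where(agree, stacked, 0.0).sum(axis=0)
    return base + merged_delta
```

In the real merge, mergekit applies this per-tensor across all model weights, with `weight` and `density` taken from the YAML above and the base model's parameters left untouched.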

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|--------|-------|
| Avg. | 75.06 |
| AI2 Reasoning Challenge (25-Shot) | 71.67 |
| HellaSwag (10-Shot) | 88.29 |
| MMLU (5-Shot) | 64.92 |
| TruthfulQA (0-shot) | 70.16 |
| Winogrande (5-shot) | 83.35 |
| GSM8k (5-shot) | 71.95 |
Model size: 7.24B params · Tensor type: BF16 · Format: Safetensors
