
# jaLLAbi

jaLLAbi is a merge of the following models using mergekit:

* openchat/openchat-3.5-0106
* machinists/Mistral-7B-SQL

## 🧩 Configuration

```yaml
slices:
  - sources:
      - model: openchat/openchat-3.5-0106
        layer_range: [0, 32]
      - model: machinists/Mistral-7B-SQL
        layer_range: [0, 32]
merge_method: slerp
base_model: openchat/openchat-3.5-0106
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
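The `merge_method: slerp` setting interpolates each pair of weight tensors along a great-circle arc rather than a straight line, with the interpolation factor `t` varied per layer according to the `filter` rules above. A minimal NumPy sketch of the slerp formula (function name and the parallel-vector fallback are illustrative, not mergekit's actual implementation):

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight tensors.

    t=0 returns v0, t=1 returns v1; intermediate t values follow the
    arc between the two (flattened, normalized) directions.
    """
    # Normalize copies to measure the angle between the tensors
    v0n = v0.flatten() / (np.linalg.norm(v0) + eps)
    v1n = v1.flatten() / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.dot(v0n, v1n), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation
        return (1 - t) * v0 + t * v1
    s0 = np.sin((1 - t) * theta) / np.sin(theta)
    s1 = np.sin(t * theta) / np.sin(theta)
    return s0 * v0 + s1 * v1

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
print(slerp(0.5, a, b))  # midpoint on the arc between a and b
```

In the config above, `t` near 0 keeps a tensor close to the base model (openchat-3.5-0106) and `t` near 1 moves it toward Mistral-7B-SQL, which is why the self_attn and mlp filters use mirrored schedules.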

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                            | Value |
|-----------------------------------|-------|
| Avg.                              | 20.07 |
| AI2 Reasoning Challenge (25-Shot) | 22.70 |
| HellaSwag (10-Shot)               | 25.04 |
| MMLU (5-Shot)                     | 23.12 |
| TruthfulQA (0-shot)               |  0.00 |
| Winogrande (5-shot)               | 49.57 |
| GSM8k (5-shot)                    |  0.00 |
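The leaderboard average is the unweighted mean of the six benchmark scores, which can be checked directly:

```python
# Benchmark scores from the table above
scores = {
    "ARC (25-shot)": 22.70,
    "HellaSwag (10-shot)": 25.04,
    "MMLU (5-shot)": 23.12,
    "TruthfulQA (0-shot)": 0.00,
    "Winogrande (5-shot)": 49.57,
    "GSM8k (5-shot)": 0.00,
}
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # 20.07, matching the reported Avg.
```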
Model size: 14.4B params · Tensor type: BF16 (Safetensors)
