This is a merge of pre-trained language models created using mergekit.
GGUF quants:
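To run one of the GGUF quants, here is a minimal llama-cpp-python sketch; the file name below is a placeholder, so substitute whichever quant you download:

```python
# Minimal sketch (not from this card) of loading a GGUF quant with
# llama-cpp-python. The file name below is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="Chunky-Lemon-Cookie-11B.Q4_K_M.gguf",  # placeholder file name
    n_ctx=4096,       # context window size
    n_gpu_layers=-1,  # offload all layers if built with GPU support
)

out = llm("Write a haiku about lemons.", max_tokens=64)
print(out["choices"][0]["text"])
```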
This model was merged using the passthrough and task_arithmetic merge methods.
The following models were included in the merge:

- mistralai/Mistral-7B-v0.1
- SanjiWatsuki/Kunoichi-7B
- SanjiWatsuki/Silicon-Maid-7B
- KatyTheCutie/LemonadeRP-4.5.3
- Sao10K/Fimbulvetr-11B-v2
The following YAML configurations were used to produce this model:
```yaml
slices:
  - sources:
      - model: mistralai/Mistral-7B-v0.1
        layer_range: [0, 24]
  - sources:
      - model: mistralai/Mistral-7B-v0.1
        layer_range: [8, 32]
merge_method: passthrough
dtype: float16
name: Mistral-11B
---
slices:
  - sources:
      - model: SanjiWatsuki/Kunoichi-7B
        layer_range: [0, 24]
  - sources:
      - model: SanjiWatsuki/Silicon-Maid-7B
        layer_range: [8, 24]
  - sources:
      - model: KatyTheCutie/LemonadeRP-4.5.3
        layer_range: [24, 32]
merge_method: passthrough
dtype: float16
name: Big-Lemon-Cookie-11B
---
models:
  - model: Big-Lemon-Cookie-11B
    parameters:
      weight: 0.85
  - model: Sao10K/Fimbulvetr-11B-v2
    parameters:
      weight: 0.15
merge_method: task_arithmetic
base_model: Mistral-11B
dtype: float16
name: Chunky-Lemon-Cookie-11B
```
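The first two documents build 48-layer (11B) stacks by passthrough-concatenating layer slices from the 32-layer 7B sources; the third blends the stacks with task arithmetic, which adds each model's weighted delta from the base: roughly Mistral-11B + 0.85 * (Big-Lemon-Cookie-11B - Mistral-11B) + 0.15 * (Fimbulvetr-11B-v2 - Mistral-11B). As an illustration, here is a minimal sketch of running the first stage through mergekit's documented Python API; the output path and options are assumptions, not taken from this card:

```python
# Minimal sketch of running the first config document above with
# mergekit's Python API. Paths and options here are assumptions.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yml", "r", encoding="utf-8") as fp:
    first_stage = next(yaml.safe_load_all(fp))  # the Mistral-11B stack

run_merge(
    MergeConfiguration.model_validate(first_stage),
    out_path="./Mistral-11B",     # hypothetical output directory
    options=MergeOptions(
        copy_tokenizer=True,      # carry the source tokenizer over
        lazy_unpickle=True,       # reduce peak memory while merging
    ),
)
```

A multi-document file like the one above matches the format consumed by the mergekit-mega script, which runs the stages in order and resolves the intermediate `name:` references (Mistral-11B, Big-Lemon-Cookie-11B) automatically.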
Open LLM Leaderboard evaluation results (detailed results can be found here):
| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 70.23 |
| AI2 Reasoning Challenge (25-shot) | 69.62 |
| HellaSwag (10-shot)               | 86.55 |
| MMLU (5-shot)                     | 65.35 |
| TruthfulQA (0-shot)               | 61.59 |
| Winogrande (5-shot)               | 79.79 |
| GSM8k (5-shot)                    | 58.45 |
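For full-precision inference, here is a minimal Transformers loading sketch; the repository id is a placeholder built from the merge's `name:` field, and float16 matches the dtype used in the configs:

```python
# Minimal Transformers inference sketch. The repo id is a placeholder;
# substitute the actual repository hosting this merge.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "<namespace>/Chunky-Lemon-Cookie-11B"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # the merge was produced in float16
    device_map="auto",
)

prompt = "Write a haiku about lemons."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```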