This is a merge of pre-trained language models created using mergekit: an experiment in merging the top three 7B models on the Open LLM Leaderboard (as of 5/30/2024).
This model was merged using the DARE TIES merge method, with BarraHome/Mistroll-7B-v2.2 as the base model.
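DARE sparsifies each fine-tuned model's delta from the base before merging: it randomly drops delta entries and rescales the survivors so the expected delta is unchanged (the `density` value in the config below is the fraction retained). A minimal sketch of that drop-and-rescale step in plain PyTorch, not mergekit's actual implementation:

```python
import torch

def dare_delta(base: torch.Tensor, finetuned: torch.Tensor, density: float = 0.7) -> torch.Tensor:
    """Drop-And-REscale: sparsify a task vector while preserving its expectation."""
    delta = finetuned - base                                  # task vector (delta from the base)
    mask = torch.bernoulli(torch.full_like(delta, density))   # keep each entry with prob `density`
    return delta * mask / density                             # rescale survivors: E[result] == delta

# Conceptually, each merged parameter is then roughly (with sign conflicts
# additionally resolved TIES-style in the real dare_ties method):
#   merged = base + 0.4 * dare_delta(base, w_experiment26) + 0.6 * dare_delta(base, w_multiverse)
# where w_experiment26 and w_multiverse are hypothetical names for the
# corresponding parameter tensors of the two donor models.
```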
The following models were included in the merge:

* yam-peleg/Experiment26-7B
* MTSAIR/multi_verse_model
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: BarraHome/Mistroll-7B-v2.2
    # no parameters necessary for base model
  - model: yam-peleg/Experiment26-7B
    parameters:
      weight: 0.4
      density: 0.7
  - model: MTSAIR/multi_verse_model
    parameters:
      weight: 0.6
      density: 0.7
merge_method: dare_ties
base_model: BarraHome/Mistroll-7B-v2.2
parameters:
  int8_mask: true
dtype: bfloat16
```
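Per mergekit's documentation, saving this config to a file and running the `mergekit-yaml` CLI (e.g. `mergekit-yaml config.yaml ./output-merge`) should reproduce the merge. The result is a standard Mistral-architecture checkpoint, so it loads like any other causal LM. A minimal usage sketch with 🤗 Transformers; `MODEL_ID` is a placeholder for this repository's id:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "path/or/repo-id-of-this-merge"  # placeholder: substitute this repository's id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # matches the merge's dtype above
    device_map="auto",
)

inputs = tokenizer("The three laws of robotics are", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```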
Evaluation results coming soon.