# Apollo🎖️
This is a merge of pre-trained language models created using mergekit.
This model was merged using the Model Stock merge method, with yam-peleg/Experiment26-7B as the base.
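Model Stock derives an interpolation ratio from the geometry of the fine-tuned weights relative to the base, then blends the average of the fine-tuned models back toward the pre-trained weights. A minimal per-tensor sketch of that idea, assuming the ratio t = N·cos(θ) / (1 + (N−1)·cos(θ)) from the Model Stock paper (this is an illustrative helper, not mergekit's implementation):

```python
import numpy as np

def model_stock_merge(base, finetuned):
    """Blend fine-tuned weight tensors toward the base tensor using a
    Model Stock-style interpolation ratio.

    base:      np.ndarray, the pre-trained weight tensor.
    finetuned: list of np.ndarray, fine-tuned versions of that tensor.
    """
    n = len(finetuned)
    deltas = [w - base for w in finetuned]

    # Estimate cos(theta) as the mean pairwise cosine similarity
    # between the fine-tuned deltas.
    cosines = []
    for i in range(n):
        for j in range(i + 1, n):
            a, b = deltas[i].ravel(), deltas[j].ravel()
            cosines.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    cos_theta = float(np.mean(cosines))

    # Interpolation ratio: t = N cos(theta) / (1 + (N - 1) cos(theta)).
    t = n * cos_theta / (1 + (n - 1) * cos_theta)

    # Merged tensor: move from the base toward the fine-tuned average.
    w_avg = np.mean(finetuned, axis=0)
    return t * w_avg + (1 - t) * base
```

When the fine-tuned deltas point in the same direction, t approaches 1 and the result is close to their plain average; when they are nearly orthogonal (mostly noise), t shrinks and the merge stays near the base weights.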
The following models were included in the merge:

- MaziyarPanahi/Calme-7B-Instruct-v0.9
- BarraHome/Mistroll-7B-v2.2
- nbeerbower/bophades-mistral-truthy-DPO-7B
- jondurbin/bagel-dpo-7b-v0.5
The following YAML configuration was used to produce this model:

```yaml
models:
  - model: MaziyarPanahi/Calme-7B-Instruct-v0.9
  - model: BarraHome/Mistroll-7B-v2.2
  - model: nbeerbower/bophades-mistral-truthy-DPO-7B
  - model: jondurbin/bagel-dpo-7b-v0.5
merge_method: model_stock
base_model: yam-peleg/Experiment26-7B
dtype: bfloat16
```
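Assuming mergekit is installed, a merge like this can typically be reproduced by saving the configuration to a file and invoking mergekit's CLI; the file and output directory names below are illustrative:

```shell
# Install mergekit (assumed environment; see the mergekit repository)
pip install mergekit

# Run the merge described by config.yml, writing the merged model
# into ./apollo-merge (an illustrative output path)
mergekit-yaml config.yml ./apollo-merge
```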