---
base_model:
- BarraHome/Mistroll-7B-v2.2
- yam-peleg/Experiment26-7B
- nbeerbower/bophades-mistral-truthy-DPO-7B
- MaziyarPanahi/Calme-7B-Instruct-v0.9
- jondurbin/bagel-dpo-7b-v0.5
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
language:
- en
---

# Check out the fine-tuned version: https://huggingface.co/NotAiLOL/Apollo-7b-orpo-Experimental

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B) as the base.

### Models Merged

The following models were included in the merge:

* [BarraHome/Mistroll-7B-v2.2](https://huggingface.co/BarraHome/Mistroll-7B-v2.2)
* [nbeerbower/bophades-mistral-truthy-DPO-7B](https://huggingface.co/nbeerbower/bophades-mistral-truthy-DPO-7B)
* [MaziyarPanahi/Calme-7B-Instruct-v0.9](https://huggingface.co/MaziyarPanahi/Calme-7B-Instruct-v0.9)
* [jondurbin/bagel-dpo-7b-v0.5](https://huggingface.co/jondurbin/bagel-dpo-7b-v0.5)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: MaziyarPanahi/Calme-7B-Instruct-v0.9
  - model: BarraHome/Mistroll-7B-v2.2
  - model: nbeerbower/bophades-mistral-truthy-DPO-7B
  - model: jondurbin/bagel-dpo-7b-v0.5
merge_method: model_stock
base_model: yam-peleg/Experiment26-7B
dtype: bfloat16
```
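To reproduce the merge, save the configuration above as `config.yaml` and run mergekit's CLI: `mergekit-yaml config.yaml ./merged`. The sketch below shows one way to load and run the merged checkpoint with `transformers`; note that `./merged` is a placeholder for wherever the merge output (or this model's published repo id) lives.

```python
# Minimal sketch: load the merge output with transformers.
# "./merged" is a placeholder path, not an official repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./merged"  # or the model's Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",           # requires the accelerate package
)

prompt = "Briefly explain what model merging is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```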