---
base_model:
- rishiraj/smol-7b
- FuseAI/OpenChat-3.5-7B-Mixtral
- openchat/openchat_3.5
- berkeley-nest/Starling-LM-7B-alpha
- FuseAI/OpenChat-3.5-7B-Solar
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged with the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, using [openchat/openchat_3.5](https://huggingface.co/openchat/openchat_3.5) as the base model.

### Models Merged

The following models were included in the merge:

* [rishiraj/smol-7b](https://huggingface.co/rishiraj/smol-7b)
* [FuseAI/OpenChat-3.5-7B-Mixtral](https://huggingface.co/FuseAI/OpenChat-3.5-7B-Mixtral)
* [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha)
* [FuseAI/OpenChat-3.5-7B-Solar](https://huggingface.co/FuseAI/OpenChat-3.5-7B-Solar)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: openchat/openchat_3.5
  - model: FuseAI/OpenChat-3.5-7B-Mixtral
  - model: FuseAI/OpenChat-3.5-7B-Solar
  - model: berkeley-nest/Starling-LM-7B-alpha
  - model: rishiraj/smol-7b
merge_method: model_stock
base_model: openchat/openchat_3.5
dtype: bfloat16
```
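
## Usage

The merged model can be loaded like any other `transformers` causal language model. The snippet below is a minimal sketch, not an official example: the `model_id` value is a placeholder for this repository's id or a local copy of the merge, and the prompt format is assumed to follow the OpenChat 3.5 convention of the base model.

```python
# Minimal sketch of loading and querying the merged model.
# Assumptions: "./merged-model" is a placeholder path/repo id, and the
# OpenChat 3.5 "GPT4 Correct ..." prompt format carries over from the base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "./merged-model"  # replace with the actual repo id or local path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

prompt = (
    "GPT4 Correct User: What is Model Stock merging?<|end_of_turn|>"
    "GPT4 Correct Assistant:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```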