---
library_name: transformers
tags:
- mergekit
- merge
- mistral
- roleplay
---

# Info

A remake of v4 using the new merge method. A very interesting model; it works well with smooth sampling at 0.25 and min-p at 0.075, and supports the ChatML and Alpaca prompt formats.

# Irene-RP-v5-7B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with /mnt/2TB/Models/Sharded/Mistral-7B-v0.2-hf-sharded as the base.

### Models Merged

The following models were included in the merge:

* [l3utterfly/mistral-7b-v0.2-layla-v4](https://huggingface.co/l3utterfly/mistral-7b-v0.2-layla-v4)
* [Locutusque/Hercules-4.0-Mistral-v0.2-7B](https://huggingface.co/Locutusque/Hercules-4.0-Mistral-v0.2-7B)
* [Weyaxi/Einstein-v5-v0.2-7B](https://huggingface.co/Weyaxi/Einstein-v5-v0.2-7B)
* [Virt-io/Irene-RP-v3-7B](https://huggingface.co/Virt-io/Irene-RP-v3-7B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: alpindale/Mistral-7B-v0.2-hf
  - model: l3utterfly/mistral-7b-v0.2-layla-v4
  - model: mergekit/Hercules_Einstein_MODELSTOCK
  - model: Virt-io/Irene-RP-v3-7B
merge_method: model_stock
base_model: alpindale/Mistral-7B-v0.2-hf
dtype: float16
```
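
### Prompt format

The card lists ChatML as a supported format. A minimal sketch of assembling a ChatML prompt for this model (the system message and helper name are illustrative, not part of the model's own tooling):

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a single-turn ChatML prompt.

    Follows the standard ChatML convention: each message is wrapped in
    <|im_start|>{role} ... <|im_end|>, and the prompt ends with an open
    assistant turn for the model to complete.
    """
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

# Example (hypothetical system prompt):
prompt = build_chatml_prompt(
    "You are Irene, a roleplay assistant.",
    "Hello!",
)
print(prompt)
```

Frontends such as SillyTavern or text-generation-webui apply this template automatically when ChatML is selected, so manual assembly is only needed for raw API calls.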