---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.2
library_name: transformers
tags:
- mergekit
- merge
- mistral
- not-for-all-audiences
---

# Model Card: ShoriRP-merged

This is a merge between:

- [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
- [ShoriRP](https://huggingface.co/lemonilia) at a weight of 1.00. The author uploads different versions, so the link goes to the author's page.

The merge was performed using [mergekit](https://github.com/cg123/mergekit). The intended objective was to make a controlled test merge at a weight of 1.00.

### Branches

Since this is a test model, each version is located on a separate branch:

- v0.57: Version 0.57
- v0.60: Version 0.60

### Configuration

The following YAML configuration was used to produce this model, where `x` is the version of ShoriRP that was used:

```yaml
merge_method: passthrough
models:
  - model: F:\AI\models\Mistral-7B-Instruct-v0.2+F:\AI\loras\ShoriRP-vx
dtype: float16
```

## Usage

Please see the [LoRA repository](https://huggingface.co/lemonilia) for proper usage. All the prompt-formatting JSON files are included in this repository for your convenience.

## Bias, Risks, and Limitations

In addition to the biases exhibited by the base model, this model will show biases similar to those observed in niche roleplaying forums on the Internet. It is not intended for supplying factual information or advice in any form.

## Training Details

This model is a merge and can be reproduced using the tools mentioned above. Please refer to the linked repositories for additional model-specific details.
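For reproduction, a passthrough merge config like the one above can also be generated programmatically before handing it to mergekit. This is a minimal sketch, assuming PyYAML is installed; the model and LoRA identifiers below are illustrative placeholders (substitute your own local paths or Hub repository names, including the ShoriRP version you want):

```python
import yaml

# Placeholder identifiers -- replace with your own paths or Hub repo names.
base_model = "mistralai/Mistral-7B-Instruct-v0.2"
lora_path = "lemonilia/ShoriRP"  # hypothetical; point this at the version you use

# mergekit passthrough config: the base model with a LoRA applied,
# expressed using mergekit's "model+lora" path syntax.
config = {
    "merge_method": "passthrough",
    "models": [{"model": f"{base_model}+{lora_path}"}],
    "dtype": "float16",
}

# Write the config so mergekit can consume it, e.g.:
#   mergekit-yaml merge-config.yaml ./output-model
with open("merge-config.yaml", "w") as f:
    yaml.safe_dump(config, f, sort_keys=False)
```

The `sort_keys=False` keeps the emitted YAML in the same field order as the hand-written config shown above.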