---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.2
library_name: transformers
tags:
- mergekit
- merge
- mistral
- not-for-all-audiences
---
# Model Card: ShoriRP-v0.57-merged
This is a merge between:

- [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
- [ShoriRP-v0.57](https://huggingface.co/lemonilia/ShoriRP-v0.57), applied at a weight of 1.00
The merge was performed using [mergekit](https://github.com/cg123/mergekit). The objective was a controlled test merge with the LoRA applied at a weight of 1.00.
## Configuration

The following YAML configuration was used to produce this model:
```yaml
merge_method: passthrough
models:
  - model: F:\AI\models\Mistral-7B-Instruct-v0.2+F:\AI\loras\ShoriRP-v0.57
dtype: float16
```
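The configuration above can be run through mergekit's command-line interface. A minimal reproduction sketch, assuming mergekit is installed and the Windows paths in the YAML are adjusted to your local copies of the base model and LoRA (the output directory name below is illustrative):

```shell
# Install mergekit (assumes a recent release providing the mergekit-yaml entry point)
pip install mergekit

# Save the YAML configuration above as config.yaml, then run the merge.
# The passthrough method with a model+lora path bakes the LoRA into the base weights.
mergekit-yaml config.yaml ./ShoriRP-v0.57-merged
```

Note that the `+` in the model path is mergekit's syntax for applying a LoRA on top of a base model before merging.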
## Usage

Please see the [LoRA repository](https://huggingface.co/lemonilia/ShoriRP-v0.57#prompting-format) for the proper prompting format. All prompt-formatting JSON files are included in this repository for convenience.
## Bias, Risks, and Limitations

In addition to the biases exhibited by the base model, this model will show biases similar to those observed in niche roleplaying forums on the Internet. It is not intended to supply factual information or advice in any form.
## Training Details

This model is a merge and can be reproduced using the tools mentioned above. Please refer to the linked repositories for further model-specific details.